
Feb 21, 2000, 3:00:00 AM

In article <88cdrp$v3o$1...@nnrp1.deja.com>, torqu...@my-deja.com wrote:

> I'm pretty familiar with quantum computing from the point of view of

> theory but I'm pretty poor at making a judgement of how well theory

> translates into practicalities (I'm just a mathematician :-).

>

> What is the current feeling about the likelihood of quantum

> computing, beyond calculations involving just a handful of

> bits, becoming a reality, especially for computations that

> are inherently digital (such as factoring integers)? I'm

> interested in views from workers in the field rather than

> just lay opinion.

My standard answer to this question is "about thirty years."

Of course, I've been saying that for almost 5 years, so maybe

I should reduce it :-).

Really, this is just a way of saying I don't know. There are

proposals that have the potential to go the distance, but they're

well beyond anything we can build today, which means unanticipated

problems will certainly arise. The question is whether those

problems can be defeated. On the other hand, someone might have

a really bright idea tomorrow, moving up the timeline a lot. And

smaller-scale things, like widespread quantum cryptography or

simulations of moderate-sized quantum systems, would come sooner.

Thirty years seems like a reasonable timescale to me -- too far

away to pin down a detailed prediction, but not completely out

of reach. It also corresponds roughly to the point where Moore's

Law hits the atomic scale, which is plausibly a point where

quantum computation would become relatively easy (or at least not

much harder than classical computation). There's also the

possibility that the whole thing will be more trouble than it's

worth, and we'll never get big quantum computers.

Daniel Gottesman

gott...@worldnet.att.net

Feb 23, 2000, 3:00:00 AM

Daniel Gottesman wrote:

[..]

> There's also the

> possibility that the whole thing will be more trouble than it's

> worth, and we'll never get big quantum computers.

>

I heard there is a super-efficient prime factorization algorithm.

This may not be of use to benevolent people, but it could be a

threat to security. So that means governments and banks will have

to put research money in, to keep them from being surprise-attacked.

(I tried to get myself a job once using this argument,

but it didn't work)

Gerard

Feb 24, 2000, 3:00:00 AM

> My standard answer to this question is "about thirty years."

> Really, this is just a way of saying I don't know. There are

> proposals that have the potential to go the distance, but they're

> well beyond anything we can build today

That sounds like you're saying the problems are purely practical ones -

we humans are pretty good at overcoming those! However Unruh, I believe,

argued a few years back that on theoretical grounds significant quantum

computers will always be thermodynamically impossible. I'm wondering

whether these arguments still hold much sway or whether Unruh's

arguments can be defeated by quantum error correction.

--

Torque

http://travel.to/tanelorn


Feb 24, 2000, 3:00:00 AM

Hey,

Quantum coherence of a single qubit has been observed using a

superconducting quantum dot---in this case, the two states differ by a

single Cooper pair. It was published in Nature by some guy at NEC in

Japan... don't have the reference handy. Anyway, it's not clear if this

will scale very well, but it seems like there are other

possibilities---like building qubits using superconducting junctions,

i.e. SND (s-wave / normal metal / d-wave).

With regard to quantum error correction, maintaining coherence long

enough for a few thousand gates to operate would get one pretty far.

I'd say the first solid state quantum computer may not be that long

away, but first we'll see some further demonstrations of coherence in

single qubit systems. Too bad, the universal quantum gate takes at

minimum 2 qubits. So the next milestone will be implementing quantum

gates---it's easy to say, "so we have this unitary operator...", but

another thing to actually do it. It's quite particular to the actual

hardware.

Feb 25, 2000, 3:00:00 AM

In article <gottesma-190...@140.seattle-01-02rs.wa.dial-access.att.net>, gott...@worldnet.att.net (Daniel Gottesman) wrote:

>In article <88cdrp$v3o$1...@nnrp1.deja.com>, torqu...@my-deja.com wrote:

>

>> I'm pretty familiar with quantum computing from the point of view of

>> theory but I'm pretty poor at making a judgement of how well theory

>> translates into practicalities (I'm just a mathematician :-).

>>

>> What is the current feeling about the likelihood of quantum

>> computing, beyond calculations involving just a handful of

>> bits, becoming a reality, especially for computations that

>> are inherently digital (such as factoring integers)? I'm

>> interested in views from workers in the field rather than

>> just lay opinion.

>

>My standard answer to this question is "about thirty years."

>Of course, I've been saying that for almost 5 years, so maybe

>I should reduce it :-).

I don't work in the field. But I do recall that 25-30 years was the

timescale often given by those working on commercial fusion reactors.

That was 20 years ago.

- Gerry Quinn

Feb 25, 2000, 3:00:00 AM

torqu...@my-deja.com wrote:

> That sounds like you're saying the problems are purely practical ones -

> we humans are pretty good at overcoming those! However Unruh, I believe,

> argued a few years back that on theoretical grounds significant quantum

> computers will always be thermodynamically impossible. I'm wondering

> whether these arguments still hold much sway or whether Unruh's

> arguments can be defeated by quantum error correction.

I believe the paper you are referring to is hep-th/9406058.

Abstract:

The effect of the inevitable coupling to external degrees of freedom of

a quantum computer are examined. It is found that for quantum calculations

(in which the maintenance of coherence over a large number of states is

important), not only must the coupling be small but the time taken in the

quantum calculation must be less than the thermal time scale, hbar/(k_B T).

For longer times the condition on the strength of the coupling to the

external world becomes much more stringent.

I'm also curious as to how well this argument holds up.
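
For a sense of scale, here is a minimal numerical check of that thermal
time scale hbar/(k_B T); the temperatures below are illustrative choices
of mine, not values from Unruh's paper:

hbar = 1.054571817e-34   # reduced Planck constant, J*s
k_B = 1.380649e-23       # Boltzmann constant, J/K

for T in (300.0, 4.2, 0.01):   # room temperature, liquid helium, dilution fridge
    print(f"T = {T:6.2f} K -> hbar/(k_B T) = {hbar / (k_B * T):.1e} s")

# Prints roughly 2.5e-14 s, 1.8e-12 s, and 7.6e-10 s respectively, which
# is the sense in which the time budget is very tight without error correction.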

[Moderator's note: I had to edit this post to keep the lines from

wrapping around. Please make sure that in the future you keep your

lines about 72 characters long, or less. - jb]

Feb 28, 2000, 3:00:00 AM

In article <893r7r$1na$1...@nnrp1.deja.com>, torqu...@my-deja.com wrote:

> > My standard answer to this question is "about thirty years."

> > Really, this is just a way of saying I don't know. There are

> > proposals that have the potential to go the distance, but they're

> > well beyond anything we can build today

>

> That sounds like you're saying the problems are purely practical ones -

> we humans are pretty good at overcoming those!

Well, yes and no. As I said elsewhere in my post, the question is

whether it's worth the trouble. If the problems to be faced are

sufficiently great, and the payoff insufficiently large, no one will

bother. It seems certain that for the near future, people will

continue to push the technology forward, if only to show we can (and

learn whatever we can in the process of doing so). It's less clear (but

still reasonably likely, I would say) that it will be pushed far enough

to build large quantum computers.

> However Unruh, I believe,

> argued a few years back that on theoretical grounds significant quantum

> computers will always be thermodynamically impossible. I'm wondering

> whether these arguments still hold much sway or whether Unruh's

> arguments can be defeated by quantum error correction.

Quantum error correction does indeed handle them. Unruh's arguments

were not particularly profound -- basically, he took a simple model

of the environment and showed that, without error correction, the

state does indeed decay. The environment interacts with any quantum

computer, introducing entropy. This issue is not in *any* way unique

to quantum computers -- any classical computer has the same problem.

In practice, real classical computers are too reliable (in the

hardware, at least) for us to have to worry about this, while for

a variety of reasons, error rates in a quantum computer would probably

be higher.

Quantum error correction, however, acts as a refrigerator, pumping

that entropy into a stream of fresh qubits, which can then be discarded.

It can be shown, in fact, that if errors occur independently on different

qubits, or with only weak correlations, and if the error rate is low

enough, we can perform arbitrarily long quantum computations with an

only modest increase in overhead. "Modest" in this case is the computer

science meaning of small asymptotic growth in the overhead (a power of

the logarithm of the length of the calculation). The constant factor,

unfortunately, is currently pretty large -- all told, perhaps a factor

of 100 increase in the number of qubits.
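
To spell out the scaling behind that "power of the logarithm" claim, the
usual concatenated-code estimate runs as follows (a sketch of the standard
argument, assuming independent errors at physical rate p below a threshold
p_th, with d physical qubits and gates per level of encoding):

\[
  p_k \approx p_{\rm th} \Bigl( \frac{p}{p_{\rm th}} \Bigr)^{2^k},
  \qquad
  \text{overhead per logical qubit} \sim d^{\,k}
\]

after k levels of concatenation. Requiring p_k T < 1 for a T-step
computation gives 2^k ~ log T up to constants, hence k ~ log log T and an
overhead of order (log T)^{log_2 d} -- polylogarithmic in T, as stated.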

But yes, basically, we have solved the problems of building a quantum

computer in *principle*, short of a fundamental breakdown of quantum

mechanics. What about other kinds of errors, highly correlated ones,

for instance? Again, they are a problem of practice, not principle,

and if there aren't too many types of error, we can sometimes deal

with those too.

Daniel Gottesman

gott...@worldnet.att.net

Feb 28, 2000, 3:00:00 AM

In <gottesma-250...@3.seattle-15-20rs.wa.dial-access.att.net>,

Daniel Gottesman <gott...@worldnet.att.net> wrote:

> But yes, basically, we have solved the problems of

> building a quantum computer in *principle*,

> short of a fundamental breakdown of quantum mechanics.

I would not call it "breakdown" unless there is something to break in

the range of precision required. Exponential summations used in QC

require hundreds if not millions of decimal places accuracy. I wonder

who would ever expect any physical theory to make sense in this realm.

I suspect the physicists would doubt even the laws of arithmetic pushed

that far :-). Are quantum amplitudes still complex numbers to this

accuracy, or do they become octonions or colored graphs?

QC of the sort that factors long numbers seems to me firmly rooted in

Sci-Fi. It is a pity that people do not distinguish it from much more

believable ideas, like Quantum Cryptography, Quantum Communications, and

the sort of Quantum Computing that deals only with locality effects,

like fast searching long arrays.

> What about other kinds of errors, highly correlated ones, for

> instance? Again, they are a problem of practice, not principle,

Even this seems unclear. QC depends on devious correlations between

q-bits. Why would not the errors be infected with such correlations?

Leonid Levin.

Mar 1, 2000, 3:00:00 AM

In article <89efau$2um$1...@news3.bu.edu>, L...@bu.edu wrote:

> In <gottesma-250...@3.seattle-15-20rs.wa.dial-access.att.net>,

> Daniel Gottesman <gott...@worldnet.att.net> wrote:

>

> > But yes, basically, we have solved the problems of

> > building a quantum computer in *principle*,

> > short of a fundamental breakdown of quantum mechanics.

>

> I would not call it "breakdown" unless there is something to break in

> the range of precision required. Exponential summations used in QC

> require hundreds if not millions of decimal places accuracy.

There's a subtle distinction that needs to be made here. When we're

talking about encoding data in a quantum error-correcting code, we

must distinguish between huge precision on the physical qubits, which

is not needed, from huge precision on the logical qubits, which *is*

needed, but follows from fault-tolerance if quantum mechanics is

approximately correct. You can always cook up theories where it

won't work, because it *is* an untested regime, but there is no

reason to believe that what's happening at the Planck scale is at

all relevant to the *logical* qubits. That's why I say it would

take a fundamental breakdown of quantum mechanics.

> I wonder

> who would ever expect any physical theory to make sense in this realm.

> I suspect the physicists would doubt even the laws of arithmetic pushed

> that far :-).

Clearly you mean this as a joke, but I think it really is the same

question. Classical computers can do arithmetic with millions

of decimal places accuracy without knowing a thing about quantum

mechanics, but the currents that make up the bits are not classical

numbers to millions of decimal places, or even tens of decimal

places -- they are quantum states. So yes, arithmetic fails if

you push it too far, but only if you push it in the wrong direction.

> > What about other kinds of errors, highly correlated ones, for

> > instance? Again, they are a problem of practice, not principle,

>

> Even this seems unclear. QC depends on devious correlations between

> q-bits. Why would not the errors be infected with such correlations?

Well, for starters, because we have to work so hard to produce those

correlations in the first place. If there were some ubiquitous

natural process that was making them all the time, quantum computation

might well be a lot easier, not harder. The assumption we make is

that the noise works on the same basic rules as we do. We build our

correlations bit by bit out of gates acting on two or three qubits

at a time. The assumption is that the noise must try to tear them

apart the same way, two or three qubits at a time.

Perhaps it's dangerous to make a philosophical observation, but I

think it's worthwhile. When quantum mechanics was first discovered

early in the 20th century, it shook the foundations of physics.

While it in no way invalidated all the physics that had gone before,

many people found it profoundly disturbing (and many still do today).

Some, such as Einstein, Podolsky, and Rosen, went so far as to

propose experiments where they felt quantum mechanics *must* fail.

Still, quantum mechanics has so far weathered all the tests it has

been put to.

Quantum computation, and Shor's factoring algorithm in particular,

is now forcing computer science to confront quantum mechanics. Many

computer scientists find it profoundly disturbing; some suggest that

it must fail. There's no guarantee that quantum mechanics will

survive this round of tests, but I think we should bear in mind one

lesson from a century of quantum mechanics: a theory can be disturbing

and still be correct.

Daniel Gottesman

gott...@worldnet.att.net

Mar 3, 2000, 3:00:00 AM

Daniel Gottesman <gott...@worldnet.att.net> wrote in message

news:89ijab$qg0$1...@info.service.rug.nl...

> > > But yes, basically, we have solved the problems of

> > > building a quantum computer in *principle*,

> > > short of a fundamental breakdown of quantum mechanics.

> > I would not call it "breakdown" unless there is something to break in

> > the range of precision required. Exponential summations used in QC

> > require hundreds if not millions of decimal places accuracy.

> There's a subtle distinction that needs to be made here. When we're

> talking about encoding data in a quantum error-correcting code, we

> must distinguish between huge precision on the physical qubits, which

> is not needed, from huge precision on the logical qubits, which *is*

> needed, but follows from fault-tolerance if quantum mechanics is

> approximately correct. You can always cook up theories where it

> won't work, because it *is* an untested regime, but there is no

> reason to believe that what's happening at the Planck scale is at

> all relevant to the *logical* qubits. That's why I say it would

> take a fundamental breakdown of quantum mechanics.

How many 'different' superpositions can exist in a qubit?

A handful, certainly. An infinity? 10^43? 10^100?

Dirk

Mar 4, 2000, 3:00:00 AM

In <89ijab$qg0$1...@info.service.rug.nl>,

Daniel Gottesman <gott...@worldnet.att.net> wrote:

> in a quantum error-correcting code, we must distinguish between

> huge precision on the physical qubits, which is not needed,

> from huge precision on the logical qubits, which *is* needed,

It is misleading to call them "logical": they are analog, not digital;

their amplitudes may rotate very gradually for all sorts of reasons.

> but follows from fault-tolerance if quantum mechanics is

> approximately correct.

Not really, QC "fault-tolerance" is limited to special classes of faults

and assumes other equations to hold to a ridiculous number of decimals.

> You can always cook up theories where it won't work,

> because it *is* an untested regime,

Quite so! There has never been any reason to assume any physical theory

correct to hundreds or millions of decimal places. In fact, we know the

fundamental theories are wrong beyond quite a small number of digits.

> but there is no reason to believe that what's happening at

> the Planck scale is at all relevant to the *logical* qubits.

The Planck scale is HUGE (an understatement) compared to the precision QC needs.

> > > What about other kinds of errors, highly correlated ones, for

> > > instance? Again, they are a problem of practice, not principle,

> > Even this seems unclear. QC depends on devious correlations between

> > q-bits. Why would not the errors be infected with such correlations?

> Well, for starters, because we have to work so hard to produce those

> correlations in the first place.

And our hard work may hurt us by infecting the errors with

devious correlations. The human viruses are spin-offs of human genes

which are efficient due to the hard work human evolution did on the genes.

Terrorists use achievements of civilization to try to bring it down,

though they would not be able to invent plastic bombs by themselves.

I emphasize that I mention only some of the many problems of QC, simply

by reacting to the points you make. People in this area tend to sweep

great many things under the rug, not unlike cold fusion folks did.

It is a pity because some other achievements based on similar ideas

(e.g., Quantum Cryptography) are quite exciting. Even Peter Shor's

result itself is quite a wonderful theorem, though to me its more

relevant implication is the opposite to what he and other QC folks

offer. In computer theory we rarely can prove something is impossible.

Mostly we content ourselves with reductions saying "This is extremely

unlikely to be possible because if it was, one could factor in

polynomial time." This is what Shor's theorem hints about the

possibility of building quantum computers.

> Quantum computation, and Shor's factoring algorithm in particular,

> is now forcing computer science to confront quantum mechanics.

Not really. Nobody expects QC to fail when only a few q-bits interact.

If a more serious quantum computer is built and does not work, nothing

will change in QM. The physicists would just say, "But who ever told you

QM amplitudes even HAVE so many meaningful digits." And the QC people

would say "The engineering must be not good enough yet."

So, this is not a falsifiable theory.

> Many computer scientists find it profoundly disturbing;

I see nothing DISTURBING in the belief that some physical values make

sense to millions of decimal places. It is just ... speculative.

-Leonid Levin.

Mar 6, 2000, 3:00:00 AM

In <gottesma-040...@248.seattle-11-12rs.wa.dial-access.att.net>,

Daniel Gottesman <gott...@worldnet.att.net> wrote:

> hundreds of decimal places are not necessary for any conceivable

> computation. The most precision you ever need is 1/T for a

> computation of T steps -- that is, log T bits of precision.

Not AFAIK. Indeed, SOME restricted types of deviations of this magnitude

are allowed. But some equations (e.g., linearity) are required with FAR

greater precision. If you could restate the theory in such a way that

the devil can come at any time and alter the 500th decimal place of

all amplitudes in any way she likes, all would be much more reasonable.

> We assume merely that errors cannot magically jump from place to

> place, but can only follow the routes forged by the gates we perform

> (and of course new errors can appear at a low rate as well).

No, the QC error-correction assumes much more than that. I remember John

Baez posted long ago his proposal of what would be a generic type of QC

errors. This was much more demanding than anything QC people allow, but

far too lenient in my eyes. John grants, the deviations are in fact

errors, violations of some laws which actually exist. He grants, some

physical values make sense to the decimal places which cannot be probed

using the resources of the entire universe. (The Universe quickly

becomes quite wimpy when exponents come into play.) He assumes they obey

laws which have some resemblance to the physics acting in more sensible

realms. This is too speculative even for such a Sci Fi fan as I am.

> By the time cold fusion was 5 years old, it was already completely

> and thoroughly discredited. Shor's algorithm is 6 years old, and

> quantum error correction is 5. Quantum computation, as a field,

> is still growing rapidly. Which is not to say it's necessarily

> right; but it is *certainly* not pathological science.

Cold fusion was discredited because, unlike QC, it was falsifiable.

Shor's algorithm is a wonderful theorem and cannot be discredited as

such. Quantum gates experiments are wonderful as well, and quite useful

for things other than QC, for example for quantum cryptography.

It is only the speculation that QC would be built some day with the

power to factor large numbers which is bogus. (I ignore, of course, the

chance that factoring may turn out to take polynomial time on ordinary

computers.) But this is not a falsifiable claim.

> A fundamental failure of quantum computation would be every bit as

> disturbing to physics as its success would be to theoretical computer

> science, probably more so.

I do not see that at all. I have yet to meet a usual physicist who

believes in any physical laws for millions (or just more than a few

dozens) of decimal places. In fact, the physicists KNOW the fundamental

laws are false in these realms. -Leonid Levin.

Mar 6, 2000, 3:00:00 AM

In article <8a0k28$aft$1...@news3.bu.edu>, l...@csb.bu.edu (Levin) wrote:

> In <gottesma-040...@248.seattle-11-12rs.wa.dial-access.att.net>,

> Daniel Gottesman <gott...@worldnet.att.net> wrote:

>

> > hundreds of decimal places are not necessary for any conceivable

> > computation. The most precision you ever need is 1/T for a

> > computation of T steps -- that is, log T bits of precision.

>

> Not AFAIK. Indeed, SOME restricted types of deviations of this magnitude

> are allowed. But some equations (e.g., linearity) are required with FAR

> greater precision.

No, they are not. We know linearity and all other laws of quantum

mechanics are at least approximately true. Let us fix, for the sake

of convenience, some degree of accuracy to which this approximation

is correct -- say, 20 digits. Now, errors below this point can do

whatever they like. Above this threshold, they must obey the laws

of quantum mechanics. But once they do obey those laws, it's an easy

theorem to show that it will take 10^20 steps before those errors can

possibly affect the computation.

Each step of the computation produces the new set of amplitudes

by adding together pairs of the old amplitudes (or perhaps sets of

four or eight for two- or three-qubit gates, but in any case sets

of bounded size). It's easy to contrive models in which

the errors ignore this fact and increase 20 orders of magnitude

in size in one step, but clearly it's not possible to prove

*anything* if you're not allowed to make *any* assumptions.

Since you refer to millions of decimal places rather than

a googol, I assume you're willing to grant that they only

increase in size by at most one order of magnitude per step.

I'll even sketch a proof of the theorem. The correct evolution

is approximately unitary. An evolution which is incorrect by

\epsilon therefore takes the state to one which has inner product

of 1 - O(\epsilon) with the correct state (or even 1 + O(\epsilon)

if we relax the laws of quantum mechanics enough to allow that).

After T time steps, the inner product could be as low as

(1 - O(\epsilon))^T. This is still quite close to 1 until

1/T = O(\epsilon). For a more detailed proof, I refer you to

Bernstein and Vazirani's paper, "Quantum Complexity Theory,"

in the SIAM J. Computing special issue on quantum computation

(October 1997). The actual proof uses the norms of operators,

which allows you to deal with errors that produce mixed states,

but basically follows the same path. Yes, it uses linearity,

but not strongly, as I argued above. It doesn't even assume

unitarity, only that the errors are a good approximation to

the correct unitary evolution.
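
Written out, the bound sketched above is essentially a triangle-inequality
(hybrid) argument; in the notation of the post, with U_t the ideal step,
V_t the actual one, and \| V_t - U_t \| <= \epsilon:

\[
  \bigl\| V_T \cdots V_1 |\psi_0\rangle - U_T \cdots U_1 |\psi_0\rangle \bigr\|
  \;\le\; \sum_{t=1}^{T} \bigl\| (V_t - U_t)\, U_{t-1} \cdots U_1 |\psi_0\rangle \bigr\|
  \;\le\; T \epsilon ,
\]

using that the (approximately) unitary factors preserve norm, so an early
error is never amplified by later steps. The deviation stays negligible
while T\epsilon << 1, i.e. per-step accuracy \epsilon = O(1/T), about
log_10 T decimal digits rather than hundreds.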

Daniel Gottesman

gott...@worldnet.att.net

Mar 8, 2000, 3:00:00 AM

Daniel Gottesman <gott...@worldnet.att.net> wrote:

>Please, give us a little credit. The central design criterion for

>fault-tolerant protocols is to prevent this from happening. We

>assume merely that errors cannot magically jump from place to place,

>but can only follow the routes forged by the gates we perform (and

>of course new errors can appear at a low rate as well). By

>arranging the gates properly, we can ensure that error correction

>will get a chance to correct the errors before they have an

>opportunity to spread too far. And yes, the error correction

>routines are themselves fault-tolerant.

Has it been shown that these routines are stable under *arbitrary

small perturbations of the Hamiltonian*, including perturbations

that couple the Hamiltonian with the outside world? This is the

kind of robustness that real-world machines must have.

(Perturbations coupling the computer with the outside world

will typically make pure states of the computer evolve to mixed

states. It's this kind of "environmentally induced decoherence"

that most quantum computation skeptics worry about.)

Mar 8, 2000, 3:00:00 AM

In <gottesma-060...@210.seattle-11-12rs.wa.dial-access.att.net>,

Daniel Gottesman <gott...@worldnet.att.net> wrote:

> We know linearity and all other laws of quantum

> mechanics are at least approximately true.

The question is what this "approximately" means. Let me offer a

simplistic interpretation. Say, it means the erroneous evolution can

give a wrong answer with probability that increases by 1/1000000 per

step. Then we are safe for many steps but just because we assume it so.

This is about the same as the theorem you provided.

> Let us fix, for the sake of convenience, some degree of accuracy

> to which this approximation is correct -- say, 20 digits.

To this accuracy all these amplitudes are 0. Remember, your

events require very specific arrangements of great many q-bits.

The amplitudes are exponentially small, unobservable.
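
To put a number on "exponentially small" (my arithmetic, just making the
point concrete): for n qubits in an equal superposition each basis-state
amplitude has magnitude

\[
  |\alpha_x| = 2^{-n/2}, \qquad n = 1000 \;\Rightarrow\; 2^{-500} \approx 3 \times 10^{-151},
\]

so to any fixed 20-digit accuracy such an amplitude does indeed round to 0.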

In the famous Feynman Challenger story he describes how he was briefed

about the reliability of the shuttle. It was so high in each little

part that he did not believe the numbers were possible to estimate

without cooking them up. (He then multiplied them by the number of parts,

and concluded that one of those shuttles was BOUND to fail.)

> I'll even sketch a proof of the theorem. The correct evolution

> is approximately unitary. An evolution which is incorrect by

> \epsilon therefore takes the state to one which has inner product

> of 1 - O(\epsilon) with the correct state

The problem is that this 1 - \epsilon is much smaller than

1 over the size of the Universe.

> For a more detailed proof, I refer you to Bernstein and Vazirani's

> paper, "Quantum Complexity Theory," in the SIAM J. Computation

> special issue on quantum computation (October 1997).

The paper actually contains many more interesting things than

this theorem, and these authors are always worth reading.

So I provide the busy readers with the Web address of the paper:

http://epubs.siam.org/sam-bin/dbq/article/30092

Mar 10, 2000, 3:00:00 AM

In <Fr0w9...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

> suppose that the universe obeys quantum mechanics ... everything

> works linearly, with superpositions, just the way that we think it

> does. This doesn't mean the standard model of physics is right ...

> it means that the basic framework of quantum mechanics on which

> physicists hang quantum field theory, string theory, etc. is correct.

But we know that even the basic laws fail if you take them to hundreds

of digits. This is quite apart from details such as the standard model.

> barring very strange quantum-coherent errors, quantum fault tolerance

> will work as advertised, and, if we can build them to the required

> accuracy of 10^(-4) or so, quantum computers will be able to factor.

Rounded to 10^{-4} (if not to 10^{-10^4} :-),

all amplitudes in your algorithm would be 0.

> Now, suppose that quantum computers can be built with the requisite

> accuracy, but quantum computers still fail to work, even using

> fault-tolerant circuits. Look at the operation of such a machine

> trying to factor a large integer using (say) a billion steps. After

> each step, let's compare the state of the real machine to the state

> of an ideal computer in a world with linear quantum mechanics doing

> the same computation. The machines start in nearly identical states,

They do not. Look what you are saying: "Here is a box. It has a

remarkable property: its many q-bits contain the text of the Bible with

the amplitude which is exactly .0[a million digit file follows]".

What could you possibly even mean by that? What physical interpretation

could this statement have even for just this one amplitude?

> and end in substantially different ones. Thus, since there are only a

> billion steps, there must be one step where the change in the real

> state of the machine deviates from what linear quantum mechanics

> would predict by one part in a billion, which is only 9 decimal

> places. Thus, for quantum computation to fail on this factorization,

> we need a nonlinearity which appears in only the 9'th decimal places.

One inductive step may seem more believable than the whole, but they are

really the same. The argument reminds me of a theory popular among 18th

century revolutionaries in Russia. They rejected oppressive Christianity

but would not give up its hope for the resurrection of the dead.

So, they said: "The task to resurrect all who ever lived sounds

impossible even with the future advanced technology. But think of it,

it only takes each person to resurrect his parents."

> One objection to the above argument might be that we can't actually

> measure the state of our machine. However, we could theoretically

> run the computation many times and just look at the average state.

> This will work just as well for the above thought experiment,

> since if we assume we can run the machine exponentially many times,

> we can in theory measure the average state to any desired accuracy.

In principle, we can measure anything to any accuracy. However, for our

sins, we are confined to this puny Universe, which does not have enough

resources to measure anything exponential. (And there are whispers that

in larger universes our laws run into contradictions and do not work :).

> One objection that might be put forward, and I suspect Leonid Levin

> is intuitively thinking of, is that we're used to working with states

> close to the tensor product basis of the Hilbert space, and that

> quantum computation uses massively entangled states.

No, this would not be my concern until you start referring to numbers

hundreds of digits long. Of course, close to the tensor product basis you

might have opportunities to restate your assertions using several short

measurable numbers instead of one long one. Such opportunities may also

exist for some special large systems, such as lasers or condensates.

But your algorithm uses amplitudes of exponential number of highly

individualized basis states. I doubt anything short of the most generic

and direct use of these huge precisions would be easy to substitute.

>|>[John] grants, some physical values make sense to the decimal places

>|> which cannot be probed using the resources of the entire universe.

>|> (The Universe quickly becomes quite wimpy when exponents come into

>|> play.) He assumes they obey laws which have some resemblance to the

>|> physics acting in more sensible realms.

>|> This is too speculative even for such a Sci Fi fan as I am.

> Wait a minute ... I believe that

> if you change the 100th decimal place of ANY physical constant,

> there is no observable change in the universe whatsoever.

I am talking not of constants but of variables, e.g., state amplitudes.

I am not sure what physical meaning their 500-th decimal place may have.

>|> It is only the speculation that QC would be built some day with the

>|> power to factor large numbers which is bogus. (I ignore, of course,

>|> the chance that factoring may turn out to take polynomial time on

>|> ordinary computers.) But this is not a falsifiable claim.

> My algorithm is perfectly falsifiable ... if you build quantum gates

>accurate to 10^(-4), and string them together to make a fault-tolerant

> circuit for factoring a (say) 100-digit number, and if the computer

> doesn't factor it, then it's falsified.

The claim that a given machine can factor is falsifiable.

The claim that one could eventually build a machine which handles

amplitudes with a great many (MUCH more than 4) digits of precision is not.

> We don't have the technology to do this today, and may not ever,

> but it's still perfectly falsifiable as physicists use the term.

Do we have the technology at least to SAY what these decimal places in

amplitudes mean?

> question you're asking is whether quantum mechanics is still linear

> (at least to a good approximation) for massively entangled systems,

I would ask this question if I could understand what it meant to the

accuracy required. As it stands, I need first to understand the physical

meaning of the 500 digit long numbers. So,

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!! !!

!! Please describe a thought experiment which can probe your machine !!

!! to be in the state you describe with the accuracy you need. !!

!! You can use the resources of the entire Universe, but NOT MORE. !!

!! !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

> and I think if you asked a physicist this question,

> rather than whether they believe QM to the millionth decimal place,

> you'd find that most of them believe it.

Polling techniques work wonders. I heard in 1939 people were polled with

two questions: "Should we use all our might to defend our British allies

from the Nazis' assault?" and "Should we allow the Europeans to pull us into

their war?" The results were vastly different ...

Leonid Levin.

Mar 12, 2000, 3:00:00 AM

Hello -

Peter Shor wrote:

<snip! of interesting discussion of QM and QC>

> The fact that Bose-Einstein condensates on billions of atoms work as

> predicted certainly seems to support it, but Bose-Einstein condenstates

> are still a long way from the regime you need for quantum computation.

I agree with everything Dr. Shor says, but the above is an interesting

statement. Does the fact that we can predict the behavior of

million-atom BE condensates really imply that we understand the

physics of enormous entangled many-body systems? To me, it implies

that (a) with excellent precision, these atoms really do act like

indistinguishable particles obeying BE statistics; and (b) we

understand the (_single-particle_) states of optical /

magneto-optic traps very well. Any atomic physicists out there

care to comment?

Best,

Doug Natelson

Mar 16, 2000, 3:00:00 AM

Daniel Gottesman <gott...@worldnet.att.net> wrote:

>John Baez wrote:

>> Has it been shown that these routines are stable under *arbitrary

>> small perturbations of the Hamiltonian*, including perturbations

>> that couple the Hamiltonian with the outside world? This is the

>> kind of robustness that real-world machines must have.

>>

>> (Perturbations coupling the computer with the outside world

>> will typically make pure states of the computer evolve to mixed

>> states. It's this kind of "environmentally induced decoherence"

>> that most quantum computation skeptics worry about.)

>This is one of the most amazing things about quantum error

>correction. It turns out that it is sufficient to correct

>just errors with the form of a Pauli matrix (i.e., bit flip,

>phase flip, and simultaneous bit and phase flip), plus the

>identity (no error), and you automatically correct the most

>general single-qubit error allowed by quantum mechanics!
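
(For reference, the linearity argument behind the quoted claim: an arbitrary
single-qubit error operator expands in the Pauli basis,

\[
  E = \alpha I + \beta X + \gamma Y + \delta Z ,
\]

so a code whose syndrome measurement distinguishes and corrects I, X, Y, and Z
on each qubit projects a general error onto one of those four cases and fixes
it -- the standard "discretization of errors" argument.)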

That's cool. But I wasn't asking about "single-qubit errors".

I was asking about general perturbations of the Hamiltonian.

To operate reliably in the real world, any piece of machinery

needs the kind of robustness I mentioned above. That's why I'll

always be a skeptic about quantum computation until someone

gives a plausible argument that quantum computers with suitable

error-correction have this kind of robustness.

Let me flesh out what I was saying, to make it perfectly clear

what I'm looking for. Suppose we have a quantum computer with

Hilbert space K and Hamiltonian H. Let's assume that if we

start with a suitable state in K and evolve it via exp(-itH),

the computer does something we want.

Fine. But in real life, any apparatus is slightly coupled to

its environment in a not-completely-understood way. So in fact

the above picture is just an idealization of a more realistic

picture like this: we have the Hilbert space K for the computer,

but also a Hilbert space K' for the environment (and any aspects

of our computer that our simplified picture ignored). We have

the Hamiltonian H for our computer, which is an operator on K,

but also an interaction Hamiltonian, which is some arbitrary

operator H' on the combined Hilbert space K tensor K'.

Of course the job of the engineer is to make H' small, so let

us write it as epsilon H', and leave it to the engineer to make

epsilon small. The quantity epsilon measures the extent to which

the computer interacts with its environment, thanks to our inability

to completely protect it from such interaction.

So: in reality, our Hilbert space is K tensor K', and the

Hamiltonian of the computer coupled to the environment is

H_{actual} = H tensor 1 + epsilon H'

Now we start off the computer in a state of the form

(desired state of the computer) tensor (some state of the environment)

and evolve it via exp(-itH_{actual}). What happens? Well,

we *hope* that if epsilon is sufficiently small (but still

realistically large!), the computer still does what we want,

except perhaps for a small chance of failure. So we have to

estimate the chance of failure as a function of epsilon.
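
A minimal numerical sketch of exactly this setup -- one computer qubit plus
one environment qubit, H_{actual} = H tensor 1 + epsilon H', with H' a random
Hermitian operator of unit norm -- just to show the overlap with the ideal
evolution degrading as epsilon grows (all specific choices here are my own
illustrative assumptions):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Ideal Hamiltonian acts only on the computer qubit: H tensor 1.
Z = np.diag([1.0, -1.0])
H = np.kron(Z, np.eye(2))

# Arbitrary coupling: a random Hermitian operator on the joint
# (computer x environment) space, normalized to unit operator norm.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Hp = (A + A.conj().T) / 2
Hp /= np.linalg.norm(Hp, 2)

psi_c = np.array([1.0, 1.0]) / np.sqrt(2)   # desired computer state
psi_e = np.array([1.0, 0.0])                # some state of the environment
psi0 = np.kron(psi_c, psi_e)

t = 10.0
ideal = expm(-1j * t * H) @ psi0

for eps in (1e-1, 1e-2, 1e-3):
    actual = expm(-1j * t * (H + eps * Hp)) @ psi0
    # Overlap of the joint states; fidelity can only increase when the
    # environment is traced out, so the computer qubit is at least this
    # close to where it should be.
    print(f"epsilon = {eps:.0e}  |<ideal|actual>|^2 = {abs(ideal.conj() @ actual) ** 2:.6f}")

For this toy example the failure probability at fixed t falls off roughly like
epsilon^2, i.e. polynomially in epsilon; the worry raised below is whether
anything forces an exponential dependence on the problem size instead.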

Suppose it turns out that to keep the chance of failure less

than 5%, we need to make epsilon exponentially small as a

function of the size of the problem to be solved. Then even

if *naively* our computer solves difficult problems in polynomial

time, we can't claim it does so *in real life*, because one

would need to spend an exponential amount of effort preventing

the computer from interacting with its environment.

Note that we are not allowed to assume H' only induces "single

bit errors". It can be arbitrary in form, as long as it's not

too big.

This is the kind of issue I'd love to see the quantum computation

folks tackle. Until they do, I don't see how we can claim that

quantum computers will ever be practical.

Mar 18, 2000, 3:00:00 AM

Daniel Gottesman <gott...@worldnet.att.net> wrote:

>ba...@galaxy.ucr.edu (John Baez) wrote:

>> So: in reality, our Hilbert space is K tensor K', and the

>> Hamiltonian of the computer coupled to the environment is

>>

>> H_{actual} = H tensor 1 + epsilon H'

>>

>> Now we start off the computer in a state of the form

>>

>> (desired state of the computer) tensor (some state of the environment)

>>

>> and evolve it via exp(-itH_{actual}). What happens? Well,

>> we *hope* that if epsilon is sufficiently small (but still

>> realistically large!), the computer still does what we want,

>> expect perhaps for a small chance of failure. So we have to

>> estimate the chance of failure as a function of epsilon.

>> Note that we are not allowed to assume H' only induces "single

>> bit errors". It can be arbitrary in form, as long as it's not

>> too big.

>I can say with confidence that if H' is completely arbitrary,

>the situation is hopeless. There's simply too many things

>that can go wrong to be able to handle them all.

The point is that H' is arbitrary but *small*. There is no way,

in practice, that we can build a machine whose Hamiltonian is

precisely known. At best we can ensure that its actual Hamiltonian

H_{actual} differs from the Hamiltonian we want it to have, (H tensor 1),

by an operator of very small norm - or something like that. This is

just a way of saying that "nothing is perfect". This is why I want

to assume

H_{actual} - H tensor 1 = epsilon H'

where H' is arbitrary but small - e.g. its operator norm is less

than 1, say.

>This is not

>terribly different from the situation with classical computers,

>actually. If *anything* can happen with a 5% probability, there's

>simply nothing you can do. For instance, if a meteor hits your

>computer, error correction is not going to help much.

Right, but my setup was supposed to take this kind of thing into account.

Suppose for a fixed value of epsilon, some choice of H' causes, with

5% probability, a failure of the computer thanks to some drastic

interaction with the environment, like a meteor collision. Then

I would expect that by reducing epsilon, the chance of this sort

of failure would go down, about linearly with epsilon. (I imagine

doing a perturbation expansion of exp(-itH_{actual}) in powers of

epsilon.) That's okay with me. What's not okay with me is if we

need something like

epsilon ~ exp(-k [size of problem being solved])

to keep the probability of failure below a fixed number. If this

happened, I think it'd mean that we'd need to manufacture the quantum

computer with a precision that grew exponentially with the size of

the problem being solved.

>This is

>one reason I say correlated errors are trouble. However, I think

>it's safe to expect that events that drastic will be extremely

>rare, so much so that we can safely ignore them and make some more

>reasonable assumptions about the form of H'.

I'm worried about the fact (at least I think it's a fact) that H'

can have terms of any form whatsoever as long as they are sufficiently

small.

Anyway, thanks for filling me in on the state of the art. I'm

glad people are addressing this sort of issue!

Mar 19, 2000, 3:00:00 AM

In <FrFuA...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

> I think I've located the source of our philosophical differences.

> The question is: what is a quantum state? I say it is a vector in

> a high-dimensional Hilbert space (2^n dimensions for n qubits).

> You say that it is a long list of coordinates. What's the difference?

This is what I mean by QC folk sweeping too many things under the rug.

I am glad I enticed you there to talk dark philosophies in a dark place.

In cryptography n may be a thousand bits (and could easily be millions.)

By ~n I will mean a reasonable power of n. The number of roughly

different vectors in 2^{~n} dimensional space H is 2^{2^{~n}}. Take a

random v \in H. The minimal size of a machine which can recognize or

generate v (approximately) is 2^{~n} -- far larger than our Universe.

(From a cardinality argument: 2^{~K} machines of K atoms.)

So, only very special vectors v could be assigned any physical meaning.

You cannot take an equal-opportunity stance and reject preferred basis.
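
Spelling the cardinality count out in symbols (my notation, same content):
machines assembled from K atoms number at most 2^{O(K)}, while the number of
pairwise distinguishable unit vectors in the 2^{~n}-dimensional space H grows
like

\[
  2^{\Omega(2^{\,n})} ,
\]

so unless K = Omega(2^{~n}) -- vastly more atoms than the visible universe
contains when n is of cryptographic size -- almost every vector v has no
machine that can prepare or recognize it even approximately.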

> In a list of 2^n coordinates, each one is going to be very small.

> There's a preferred basis---of whichever coordinates you choose to

> write down the vector. The question then becomes: does the universe

> treat quantum states as vectors or as lists of coordinates?

> If this question even makes sense,

> the only possible way to answer it is by experiment.

As you see from my argument, this question cannot have an experimental

answer. This is why I think you can say anything you want about your

machine without compelling the physicists to take any stance about it.

If it fails to factor, they would just shrug.

If it, OTOH, succeeds, they would be bewildered.

> > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

> > !! !!

> > !! Please describe a thought experiment which can probe your machine !!

> > !! to be in the state you describe with the accuracy you need. !!

> > !! You can use the resources of the entire Universe, but NOT MORE. !!

> > !! !!

> > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

> You can't probe the machine to check whether it's in an arbitrary state.

> If I give you two computer programs, can you tell me whether they compute

> the same function? And if you can't, does that mean that writing computer

> programs is a hopeless venture?

No, because if they do not compute the same function there is an input

on which they can be shown to disagree.

> That's a great story. As it turned out, they didn't defend their British

> allies from the Nazi's assault (at least not Czechoslovakia, Poland or

> France). And they did get pulled into the war. Is there a lesson here?

Yes. They did succeed in defending the Brits from the Nazis.

Let me end with a quote from my private e-mail:

Peter, [...] I hope you realize that my arguing does not in any way

detract from my admiration of your result. I just think you are

forfeiting a greater possible set of implications in favor of a cheaper

one. Sort of like if Maxwell had sold the Demon of his famous

thought experiment as a path to cheaper electricity from heat.

Much of the insight of today's thermodynamics would be lost or delayed.

In any case, interpreting a theorem of the form "A implies B"

[that feasibility of QC implies feasibility of factoring]

as an argument in favor of A is a peculiar logic.

Leonid Levin.

[Note to moderators: Peter's article I am replying to did not propagate to

all sites, e.g., to bu.edu. I noticed it only because I also have an MIT

account. Several articles in the last few weeks had this problem.]

[Moderator's note: there are a lot of problems like this now. I can't

really do anything about this, but apparently some usenet bigshots are

working on it. - jb]

Mar 22, 2000, 3:00:00 AM

In article <2000032000...@csb.bu.edu>, Leonid Levin <l...@cs.bu.edu> writes:

|> In <FrFuA...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

|>

|> > I think I've located the source of our philosophical differences.

|> > The question is: what is a quantum state? I say it is a vector in

|> > a high-dimensional Hilbert space (2^n dimensions for n qubits).

|> > You say that it is a long list of coordinates. What's the difference?

|>

|> This is what I mean by QC folk sweeping too many things under the rug.

|> I am glad I enticed you there to talk dark philosophies in a dark place.

|>

|> In cryptography n may be a thousand bits (and could easily be millions.)

|> By ~n I will mean a reasonable power of n. The number of roughly

|> different vectors in 2^{~n} dimensional space H is 2^{2^{~n}}. Take a

|> random v \in H. The minimal size of a machine which can recognize or

|> generate v (approximately) is 2^{~n} -- far larger than our Universe.

|> (From a cardinality argument: 2^{~K} machines of K atoms.)

|>

|> So, only very special vectors v could be assigned any physical meaning.

|> You cannot take an equal-opportunity stance and reject preferred basis.

But this doesn't say anything about a preferred basis --- it says that

there are some preferred states. Quantum mechanical machines can quickly

reach some very entangled states which aren't succinctly expressible in

terms of coordinates, and quantum mechanics makes very concrete predictions

about these states. This is also not purely a quantum mechanical problem.

There are many 1000-digit numbers that will never be written down (even though,

as opposed to the quantum mechanical situation, they actually could be).

Are the rules of multiplication for these numbers different?

|> > In a list of 2^n coordinates, each one is going to be very small.

|> > There's a preferred basis---of whichever coordinates you choose to

|> > write down the vector. The question then becomes: does the universe

|> > treat quantum states as vectors or as lists of coordinates?

|> > If this question even makes sense,

|> > the only possible way to answer it is by experiment.

|>

|> As you see from my argument, this question cannot have an experimental

|> answer. This is why I think you can say anything you want about your

|> machine without compelling the physicists to take any stance about it.

|> If it fails to factor, they would just shrug.

|> If it, OTOH, succeeds, they would be bewildered.

Well ... only if they consider quantum mechanics to be talking about lists

of coordinates and not vectors. If there are non-linearities in quantum

mechanics which are detectable by watching quantum computers fail, physicists

will be VERY interested (I would expect a Nobel prize for conclusive evidence

of this). Of course, quantum computers could also fail for other reasons.

In some other posts in this thread, John Baez has pointed out a gap in our

theory of quantum fault-tolerant computation having to do with how fast the

multi-term pieces in an error Hamiltonian go to zero. I expect this will

eventually be resolved satisfactorily, but if quantum computers fail

for this reason it won't be anywhere near as momentous.

Peter Shor

Mar 22, 2000, 3:00:00 AM

In <Fr0w9...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

> suppose that the universe obeys quantum mechanics ... everything

> works linearly, with superpositions, just the way that we think it

> does. This doesn't mean the standard model of physics is right ...

> it means that the basic framework of quantum mechanics on which

> physicists hang quantum field theory, string theory, etc. is correct.

But we know that even the basic laws fail if you take them to hundreds

of digits. This is quite apart from details such as the standard model.

> barring very strange quantum-coherent errors, quantum fault tolerance

> will work as advertised, and, if we can build them to the required

> accuracy of 10^(-4) or so, quantum computers will be able to factor.

Rounded to 10^{-4} (if not to 10^{-10^4}),

all amplitudes in your algorithm would be 0.

> Now, suppose that quantum computers can be built with the requisite

> accuracy, but quantum computers still fail to work, even using

> fault-tolerant circuits. Look at the operation of such a machine

> trying to factor a large integer using (say) a billion steps. After

> each step, let's compare the state of the real machine to the state

> of an ideal computer in a world with linear quantum mechanics doing

> the same computation. The machines start in nearly identical states,

They do not. Look what you are saying: "Here is a box. It has a

remarkable property: its many q-bits contain the text of the Bible with

the amplitude which is exactly .0[a million digit file follows]".

What could you possibly even mean by that? What physical interpretation

could this statement have even for just this one amplitude?

> and end in substantially different ones. Thus, since there are only a

> billion steps, there must be one step where the change in the real

> state of the machine deviates from what linear quantum mechanics

> would predict by one part in a billion, which is only 9 decimal

> places. Thus, for quantum computation to fail on this factorization,

> we need a nonlinearity which appears in only the 9'th decimal places.
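(The arithmetic behind the quoted "9th decimal place" is a triangle
inequality -- a sketch, assuming the ideal evolution is unitary, so deviations
introduced at earlier steps are not shrunk by later ones. Writing d_t for the
extra deviation introduced at step t,

  || psi_real(N) - psi_ideal(N) ||  <=  d_1 + d_2 + ... + d_N,

so if the left-hand side is of order 1 after N = 10^9 steps, at least one d_t
must be at least about 10^{-9}.)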

This reminds me of a theory popular among 18th-century revolutionaries in

Russia. They wanted to combine the materialist philosophy with Christian

belief in the resurrection of the dead.

They said: "The task to resurrect all who ever lived sounds impossible

even with the future advanced technology.

But think of it, it only takes each person to resurrect his parents."

> One objection to the above argument might be that we can't actually

> measure the state of our machine. However, we could theoretically

> run the computation many times and just look at the average state.

> This will work just as well for the above thought experiment,

> since if we assume we can run the machine exponentially many times,

> we can in theory measure the average state to any desired accuracy.

In principle, we can measure anything to any accuracy. However, for our

sins, we are confined to this puny Universe, which does not have enough

resources to measure anything exponential. And there are whispers that

in larger universes our laws run into contradictions and do not work.

Do we have a technology at least to SAY what these decimal places in

amplitudes mean?

> question you're asking is whether quantum mechanics is still linear

> (at least to a good approximation) for massively entangled systems,

I would ask this question if I could understand what it meant to the

accuracy required. As it stands, I need first to understand the physical

meaning of the 500 digit long numbers.

> and I think if you asked a physicist this question,

Mar 24, 2000, 3:00:00 AM

to

In article <8bau52$84c$1...@nnrp1.deja.com>, tu...@ar-tiste.com wrote:

> Gottesman said:

>

> >Hamiltonians with many-body terms are very rare; it's much more

> >likely the bothersome terms in H' will be two- or three-qubit

> >interactions.

>

> Is it unlikely that the same environment interacts with

> 5 different randomly selected qubits at the same time?

I would expect so, yes. If those 5 qubits are all next to

each other, it becomes much more reasonable (but can be dealt

with by a code correcting "burst errors").

> Can this eventuality subvert your error correction schemes?

If it stops at 5, it can be dealt with in principle (though

dealing with 5 arbitrary errors would be kind of a pain).

The problem comes if the frequency of errors drops only

slowly with the number of qubits affected.

> Gottesman said:

>

> >We also need different parts of the environment not to interact

> >with each other for this scenario. The reason for this caveat

> >is that a large coherent environment could indirectly connect

> >up distant qubits. This is what happens with collective errors.

> >I have a great deal of trouble envisioning this as a serious

> >problem, however. Long-range coherence is what *we're* trying

> >to achieve, but it's hard because it's so delicate. It seems

> >like it should be straightforward to mess up the environment so

> >it can't conspire against us this way.

>

> Something that I don't understand and maybe

> you can clarify is this:

> Classical long range correlations are very common in nature,

> and they are sometimes very hard to shield against (e.g., gravity).

> The long range correlation in the environment that would

> produce these types of errors that you find hard to correct,

> does it need to be a purely quantum correlation,

> or would classical long range correlations in the environment

> suffice to subvert your error correction schemes? But

> if an interaction can produce classical correlations, then it surely

> also produces quantum correlations, since separating correlations

> into classical and quantum is artificial.

A lot of such classical correlations fall into two categories.

In one (such as a meteor hitting the computer), the error is

large but extremely unlikely. In another (such as gravity), the

error is always there, but is always small. This sort of error

is completely benign -- it does something small (I + epsilon E)

to every qubit. We can expand this in epsilon. The O(1) term

is I, the identity. The O(epsilon) terms all involve 1 error.

The O(epsilon^2) terms involve 2 errors, and so on -- the

O(epsilon^k) terms involve k errors. k-qubit errors are

exponentially less likely than one-qubit errors, which is

just what we assume in the standard error model.
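A toy check of that expansion (just a sketch; the value of epsilon and the
choice of X as the single-qubit error are arbitrary, not tied to any particular
device): read the coefficient of a weight-2 Pauli string directly out of
(I + epsilon X) acting independently on three qubits.

    import numpy as np

    eps = 0.01
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])

    def kron(*ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # The same small error (I + eps X) acting independently on each of 3 qubits.
    U = kron(I + eps * X, I + eps * X, I + eps * X)

    # Pauli strings are orthogonal under Tr(P Q)/2^n, so this picks out the
    # coefficient of the weight-2 string X x X x I in the expansion of U.
    P = kron(X, X, I)
    coeff = np.trace(P @ U) / 2 ** 3
    print(coeff)   # ~ eps^2 = 1e-4: a k-qubit error term carries a factor eps^k

(P is real and Hermitian here, so Tr(P Q)/2^n is the usual normalized
Hilbert-Schmidt inner product.)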

Obviously, there is a full spectrum of things in between

these two examples. The problems come from errors which

make a choice either to do nothing or to do something

large to every qubit. We know that classical computation

is possible (at least in practice, if not in principle :-) ),

and this kind of error is also a problem for classical

computers, so that puts bounds on how likely that sort

of error can be.

The distinction between quantum and classical is that in

a classical computer, we only have to worry about

probabilities, while in a quantum computer, we are concerned

about amplitudes. If an error appears with an amplitude

p, then a classical computer would only see it with

probability p^2, whereas a quantum computer would have to

worry about it after about 1/p operations. This requires,

however, that the environment maintain large-scale coherence,

so that the amplitudes from different times sum as amplitudes

rather than probabilities. Otherwise, the quantum computer

is in the same boat as classical computers.
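A rough numerical illustration of that amplitude-vs-probability distinction
(a sketch; p = 10^{-3} and the X-rotation error model are arbitrary
illustrative choices): repeat a small error of amplitude p either with a steady
phase or with a scrambled sign, and compare the flip probability after N
operations.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 1e-3                      # error amplitude per operation (illustrative)
    theta = 2 * np.arcsin(p)      # X-rotation whose off-diagonal amplitude is p

    def rx(angle):
        c, s = np.cos(angle / 2), np.sin(angle / 2)
        return np.array([[c, -1j * s], [-1j * s, c]])

    def flip_probability(n_steps, coherent, trials=500):
        """P(qubit ends in |1>) after n_steps small X errors, starting from |0>."""
        runs = 1 if coherent else trials
        total = 0.0
        for _ in range(runs):
            psi = np.array([1.0 + 0j, 0.0 + 0j])
            for _ in range(n_steps):
                # Coherent: steady phase, amplitudes add.
                # Incoherent: sign scrambled each step, only probabilities add.
                sign = 1.0 if coherent else rng.choice([-1.0, 1.0])
                psi = rx(sign * theta) @ psi
            total += abs(psi[1]) ** 2
        return total / runs

    for n in (100, 1000):
        print(n, flip_probability(n, True), flip_probability(n, False))

With these numbers the coherent error reaches an order-1 failure probability
around N ~ 1/p = 1000 operations, while the dephased version is still sitting
near N p^2 ~ 10^{-3}.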

A bad error would be a term in the Hamiltonian like

epsilon Z_1 \otimes Z_2 \otimes ... \otimes Z_n

(Z_i is a Pauli matrix acting on qubit i). For this to be

a bad *quantum* error rather than a bad classical error,

the phase of epsilon has to be very steady. (Actually, it

needs to vary precisely to stay in synch with whatever

computation is going on, and that's even harder.)

Things like gravity actually produce terms that look like

epsilon \sum_i (c_i Z_i),

which you can see are one-qubit errors.
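For what it's worth, the one-line calculation behind that contrast (a sketch;
epsilon and the c_i are whatever couplings the physics provides): since the
Z_i commute,

  exp(-i t epsilon \sum_i c_i Z_i)  =  \prod_i exp(-i t epsilon c_i Z_i),

a product of independent one-qubit phase errors, whereas
(Z_1 \otimes ... \otimes Z_n)^2 = I gives

  exp(-i t epsilon Z_1 \otimes ... \otimes Z_n)
      =  cos(t epsilon) I  -  i sin(t epsilon) Z_1 \otimes ... \otimes Z_n,

i.e. an amplitude of roughly t epsilon sitting on a single n-qubit error,
which is exactly the kind of term described above as bad.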

Daniel Gottesman

gott...@worldnet.att.net

Mar 24, 2000, 3:00:00 AM

to

Peter Shor wrote:

{

I think I've located the source of our philosophical differences.

The question is: what is a quantum state? I say it is a vector in a

high-dimensional Hilbert space (2^n dimensions for n qubits).

You say that it is a long list of coordinates. What's the difference?

}


{

There's only hundreds of digits of accuracy needed if you are

computing using lists of coordinates ... not if you somehow compute

using the vectors directly (something we can't do with our puny

computers, but who's to say that's not the way the universe works).

}

I don't follow this. A vector is specified by a basis and its

coordinates in that basis. In QC theory, there is no uncertainty

in the basis, so any uncertainty in a vector is the same as the

uncertainty in its coordinates.

P.Shor wrote:

{

Well ... only if they consider quantum mechanics to be talking

about lists of coordinates and not vectors. If there are non-

linearities in quantum mechanics which are detectable by

watching quantum computers fail, physicists will be VERY interested (I

would expect a Nobel prize for conclusive evidence of this). Of course,

quantum computers could also fail for other reasons.

}

If quantum computers fail, I would first think that

there is an unwanted interaction with the environment. It would take

many more, different kinds of experiments best explained by a non-

linearity before I would accept the theory that the failure of QCs is

due to a non-linearity in QM. If my car fails, my first hypothesis is

not that it is failing due to a breakdown of Newton's Laws.

Mar 24, 2000, 3:00:00 AM

to

In article <8bg29j$so7$1...@nnrp1.deja.com>, <tu...@ar-tiste.com> wrote:

> Peter Shor wrote:

> { I think I've located the source of our philosophical differences.

> The question is: what is a quantum state? I say it is a vector in a

> high-dimensional Hilbert space (2^n dimensions for n qubits).

> You say that it is a long list of coordinates. What's the difference?}

> { There's only hundreds of digits of accuracy needed if you are

> computing using lists of coordinates ... not if you somehow compute

> using the vectors directly (something we can't do with our puny

> computers, but who's to say that's not the way the universe works). }


> I don't follow this. A vector is specified by a basis and its

> coordinates in that basis. In QC theory, there is no uncertainty

> in the basis, so any uncertainty in a vector is the same as the

> uncertainty in its coordinates.

Peter Shor is having a topological disagreement with me: how to measure

precision. Hilbert Space has many topologies: weak, strong, etc.

In finite dimensions these distinctions cannot be used.

Peter measures precision by the Hilbert diameter of the neighborhood.

I see the neighborhood as too thin if our Universe does not have enough

resources to handle it. "Handle" means to generate a vector in it or to

distinguish the ball's center from its outside.


Mar 27, 2000, 3:00:00 AM

to

l...@csb.bu.edu (Levin) wrote:

>> barring very strange quantum-coherent errors, quantum fault tolerance

>> will work as advertised, and, if we can build them to the required

>> accuracy of 10^(-4) or so, quantum computers will be able to factor.

>

> Rounded to 10^{-4} (if not to 10^{-10^4}),

> all amplitudes in your algorithm would be 0.

But these "amplitudes" don't have physical significance. It doesn't

matter that they are practically 0, because quantum computers don't

make use of them.

Put another way, the evolution of the state vector isn't affected by

the amplitudes in any particular basis. So why should it affect the

physics that they are essentially zero?
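A small numerical way to see this (a sketch; n = 10 is an arbitrary choice):
the uniform superposition has every computational-basis amplitude equal to
2^{-n/2}, yet a mere change of basis -- a Hadamard on each qubit -- maps the
very same vector to one with a single amplitude equal to 1.

    import numpy as np

    n = 10
    dim = 2 ** n

    # Uniform superposition: every computational-basis amplitude is 2^{-n/2}
    # (~0.03 here, astronomically small for cryptographic n), yet the state is
    # perfectly ordinary.
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

    # Hadamard on every qubit: a change of basis, not a change of physics.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)

    phi = Hn @ psi
    print(np.max(np.abs(psi)))   # 2^{-n/2}: every coordinate tiny in this basis
    print(np.max(np.abs(phi)))   # 1.0: the same vector is |00...0> in the rotated basis

All inner products, and hence all measurement statistics, are the same either
way; only the coordinates changed.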

The QC folks are on to a winner either way, at least as long as they

can sort out some of the fiddly details. If it works, they get an

extremely useful technological tool. If it doesn't, they get some

brand new physics!

Harry.

---

Harry Johnston, om...@ihug.co.nz

One Earth, One World, One People

Mar 27, 2000, 3:00:00 AM

to

Leonid Levin wrote:

> I see the neighborhood as too thin if our Universe does not

> have enough resources to handle it. "Handle" means to generate


> a vector in it or to distinguish the ball's center from its outside.

Is it the general idea of quantum computers to address/control

10^100 spatial elements per cubic centimeter as quantum memory? See:

http://www.deja.com/=dnc/[ST_rn=ap]/getdoc.xp?AN=120225836

Mar 29, 2000, 3:00:00 AM

to

To get to the substance, I will try to clarify our

terminological difference with Peter Shor about precision.


In <FrFuA...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

> I think I've located the source of our philosophical differences.

> The question is: what is a quantum state? I say it is a vector in a

> high-dimensional Hilbert space (2^n dimensions for n qubits). You say

> that it is a long list of coordinates. [...] There's only hundreds of

> digits of accuracy needed if you are computing using lists of

> coordinates ... not if you somehow compute using the vectors directly

We are talking about two subtly different things: metric and topology.

Metric tells how close our ideal point is to a specific wrong one.

Topology tells how close it is to the combination of all unacceptable

(non-neighboring) points. This may differ from the distance to the

closest unacceptable point, especially if QM is involved.

In infinite dimensions the distinction between positive separation and 0

varies with topologies. In finite dimensions 0-vs.-positive distinction

is too coarse: all topologies agree. Since only math nerds would call

2^{500} finite, we need some quantitative refinement, some sort of a

weak-topological (not metric) DEPTH of a neighborhood.

[ Do physicists have such things? Any word from the experts here? ]

If there could be machines within our Universe that can handle this

depth, I would consider it physical. By "handle" I mean feats like

generating points inside the neighborhood or distinguishing its center

from the outside. Otherwise, I would call the neighborhood too thin.

In <2000032000...@csb.bu.edu>, I <L...@cs.bu.edu> wrote:

> In cryptography n may be a thousand bits (and could easily be millions.)

> By ~n I will mean a reasonable power of n. The number of roughly

> different vectors in 2^{~n} dimensional space H is 2^{2^{~n}}. Take a

> random v in H. The minimal size of a machine which can recognize or

> generate v (approximately) is 2^{~n} -- far larger than our Universe.

> (From a cardinality argument: 2^{~K} machines of K atoms.)
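A back-of-envelope version of that count (a sketch; the constants are
illustrative, not part of the argument):

    from math import log2

    n = 1000      # qubits, as in the cryptographic example above
    eps = 1e-4    # the accuracy figure quoted earlier in the thread

    # Writing down a generic n-qubit state to accuracy eps takes roughly
    # 2 * 2^n * log2(1/eps) bits (two real coordinates per dimension).
    log2_bits = (n + 1) + log2(log2(1 / eps))
    print(f"description length ~ 2^{log2_bits:.1f} bits")

    # A machine assembled from K parts has only ~c^K distinguishable
    # configurations, i.e. O(K) bits of description, so K must itself be of
    # order 2^n to single out a typical vector -- the cardinality comparison above.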

I can imagine a feasible way to separate TWO random

states of a Quantum Computer (QC) FROM EACH OTHER.

However, as this calculation shows, no machine can separate a random QC

state from the set of states more distant from it than the QC tolerates.

(One could argue, the precision issue does not even arise because these

huge states cannot be pin-pointed and make no individual physical sense.

However, I would rather avoid this additional controversy here.)

In <FrsHL...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

> If there are non-linearities in quantum mechanics which are detectable

> by watching quantum computers fail, physicists will be VERY interested

> (I would expect a Nobel prize for conclusive evidence of this).

With a few q-bits, QC will eventually be made to work. But the progress

will stop long before QC factoring starts competing with ordinary PCs.

The QC people will then demand some noble prize for the correction to

Quantum Mechanics. But the committee will want more specifics than

simply a non-working machine, so something like observing the state of

the QC would be needed. Then QC folks may find the Universe too small

for observing individual states of the needed dimensions and accuracy.

(If they raise sufficient funds to compete with paper-and-pencil

factoring, they may get the Nobel Prize in Economics :-)

This is why I disagree with Peter Shor that QC parameters are

sensible and stay within the realms of physics. -Leonid Levin.

Mar 31, 2000, 3:00:00 AM

to

Leonid Levin wrote:

>

> To get to the substance, I will try to clarify our

> terminological difference with Peter Shor about precision.

>

> In <FrFuA...@research.att.com>, Peter Shor <sh...@research.att.com> wrote:

> > I think I've located the source of our philosophical differences.

> > The question is: what is a quantum state? I say it is a vector in a

> > high-dimensional Hilbert space (2^n dimensions for n qubits). You say

> > that it is a long list of coordinates. [...] There's only hundreds of

> > digits of accuracy needed if you are computing using lists of

> > coordinates ... not if you somehow compute using the vectors directly

>


> is too coarse: all topologies agree. Since only math nerds would call

> 2^{500} finite, we need some quantitative refinement, some sort of a

> weak-topological (not metric) DEPTH of a neighborhood.

>

> [ Do physicists have such things? Any word from the experts here? ]

I thought that was what stacks of quantum tetrahedra and quantum

octahedra were supposed to do.

Any experts out there to confirm or refute this?

>

> If there could be machines within our Universe that can handle this

> depth, I would consider it physical. By "handle" I mean feats like

> generating points inside the neighborhood or distinguishing its center

> from the outside. Otherwise, I would call the neighborhood too thin.

>

Hey, also, on a somewhat related issue, isn't number theory and

abstraction an OUTCOME of naturally occurring quantum gravitational

computation? How can _those symbol sets_ then be fruitfully employed

to give a functional model of itself?

Do you follow? Looks like a most robust symbol set is required right at

the start.

--

Best regards,

Ralph E. Frost

Frost Low Energy Physics

http://www.dcwi.com/~refrost/index.htm ..Feeling is believing..

Mar 31, 2000, 3:00:00 AM

to

(Peter Shor wrote:)

>My algorithm is perfectly falsifiable ... if you build quantum gates accurate

>to 10^(-4), and string them together to make a fault-tolerant circuit for

>factoring a (say) 100-digit number, and if the computer doesn't factor it, then

>it's falsified. We don't have the technology to do this today, and may

>not ever, but it's still perfectly falsifiable as physicists use the term.

>

Because it is mathematically so beautiful, Peter Shor's algorithm will survive

in modified form even if the current scheme of entanglement somehow turns out

not to be possible.


I highly recommend the article by Joel Birnbaum and R. Stanley Williams in

Physics Today, Jan. 2000 to the audience of this thread. When I read the

first half of the article I thought "Was it God who wrote these lines?", though

the second half is just the commercial blurb of Teramac.

Euclid38

Apr 4, 2000, 3:00:00 AM

to

Harry Johnston <om...@ihug.co.nz> wrote:

> The QC folks are on to a winner either way, at least as long as they

> can sort out some of the fiddly details. ...

That is not only true for QC, but for any other can of worms :-)

> ... If it works, they get an

> extremely useful technological tool. If it doesn't, they get some

> brand new physics!

If it doesn't work because of entanglement with the environment,

then I don't see the new physics (because that is exactly what many

physicists predict).

-- Jos

Apr 4, 2000, 3:00:00 AM

to

At 16 Mar John Baez <ba...@galaxy.ucr.edu> wrote:

>Daniel Gottesman <gott...@worldnet.att.net> wrote:

>

[...]

>

>>This is one of the most amazing things about quantum error

>>correction. It turns out that it is sufficient to correct

>>just errors with the form of a Pauli matrix (i.e., bit flip,

>>phase flip, and simultaneous bit and phase flip), plus the

>>identity (no error), and you automatically correct the most

>>general single-qubit error allowed by quantum mechanics!

>

>That's cool. But I wasn't asking about "single-qubit errors".

>I was asking about general perturbations of the Hamiltonian.

>To operate reliably in the real world, any piece of machinery

>needs the kind of robustness I mentioned above. That's why I'll

>always be a skeptic about quantum computation until someone

>gives a plausible argument that quantum computers with suitable

>error-correction have this kind of robustness.

If I understand John's idea properly, the explanation may be:

If some error correction code copes with the k errors mentioned by Gottesman,

then, by direct consideration of the expressions for errors due to entanglement

of arbitrary k qubits with the environment Hamiltonian, those errors will also

be corrected.

It does not matter whether they are big or not -- the errors are just "switched"

from our qu-word to auxiliary ancilla qubits, which can afterwards be thrown in

the trash, like a duster after cleaning your computer screen -- all the errors

go from the computer into the duster.
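As a concrete (if standard) illustration of that picture -- a sketch using the
three-qubit bit-flip code as a stand-in, not the codes discussed above --
syndrome extraction is what moves the error information off the data and onto
ancillas, which are then discarded:

    import numpy as np

    def kron(*ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])

    # Three-qubit bit-flip code: |0> -> |000>.  Here the parity checks Z1Z2 and
    # Z2Z3 are evaluated directly; in the fault-tolerant version their values
    # are copied onto ancilla qubits via CNOTs, and the ancillas are measured
    # and thrown away -- the "duster" above.
    logical_zero = np.zeros(8)
    logical_zero[0] = 1.0

    corrupted = kron(I, X, I) @ logical_zero          # bit flip on the middle qubit

    s1 = corrupted @ (kron(Z, Z, I) @ corrupted)      # parity of qubits 1,2
    s2 = corrupted @ (kron(I, Z, Z) @ corrupted)      # parity of qubits 2,3
    syndrome = (int(s1 < 0), int(s2 < 0))             # (1, 1): middle qubit flipped

    which = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome]
    ops = [I, I, I]
    if which is not None:
        ops[which] = X
    recovered = kron(*ops) @ corrupted

    print(syndrome)                                   # (1, 1)
    print(np.allclose(recovered, logical_zero))       # True: the data is clean again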

Alexander Vlasov

Real E-Mail is: MyFirstName...@pobox.spbu.ru
