
Quantum Computers.


Nicolaas Vroom

Dec 2, 2002, 5:07:16 PM
to physics_research

On page 51 of Scientific American, November 2002, it is written:
"How many computational steps are needed to find the
prime factors of a 300-digit number?
The best classical algorithm would take about 5*10E24 steps.
By taking advantage of innumerable quantum states
a quantum factoring algorithm would take only 5*10E10 steps"

My question is: why are the two not the same?
How do we explain that a quantum computer requires
such a relatively small number of steps?

Suppose this is the best classical algorithm:

1 INPUT 300dn (300-digit number)
2 N1 = 1
3 DO
4 N1 = N1 + 1
5 N2 = 300dn/N1
6 Is N1 * Int(N2) = 300dn
7 Yes Is N2 prime ?
8 Yes Print "Eureka" N1, N2 : Exit
9 No Exit
10 Loop Until N1 > SQR(300dn)
11 Print 300dn "is prime number !"

Line 7:
This test is a small program (inner loop) of roughly 6 lines
which is almost identical to the above program.

Line 10 can also be written as:
10 Loop Until N1 * N1 > 300dn
This involves a multiplication of two 150-digit numbers.

Line 10 is important because it answers the question:
How many times do we have to execute the outer loop?
Answer: roughly 10E150 times.

Roughly speaking each loop consists of 10 steps.
That means the total number of steps is 10E151.
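[Editorial note: the pseudocode above can be sketched in runnable Python as follows. This is only a minimal illustration of the step counting in the post, not the best classical algorithm; the function name and the small test number are made up for demonstration.]

```python
def trial_division(n):
    """Classical trial division, mirroring the pseudocode above:
    scan candidate divisors d up to sqrt(n)."""
    d = 2
    while d * d <= n:          # same stopping rule as 'Loop Until N1 > SQR(300dn)'
        if n % d == 0:
            return d, n // d   # found a factor pair
        d += 1
    return None                # no divisor found: n is prime

print(trial_division(3 * 5))   # → (3, 5)
```

For a 300-digit n the loop runs on the order of 10^150 times, which is the count the post arrives at.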

The first improvement is to replace line 4 by:
4 N1 = N1 + 2 i.e. test only odd numbers.
The problem is that both the classical computer and
the quantum computer will benefit from this improvement,
so it does not explain the difference between the two.

One possibility is to perform many loops in parallel,
but that means more hardware.
On the other hand this is not a true solution because it
can be done for both a classical computer and a quantum computer.

The article in Scientific American gives the answer:
"By taking advantage of innumerable quantum states"
But how do you do that in practice in order to reduce the number
of steps (outer loops ?) of the quantum computer?

Nick

[Moderator's note: "Shor's algorithm" is the answer to your
question. More generally, quantum computers can be more efficient
than classical computers because a superposition of states can
"perform many loops in parallel" without requiring more hardware
(roughly speaking).

-- KS]

Charles DH Williams

Dec 2, 2002, 6:54:30 PM
to
In article <3DE7DD7B...@pandora.be>, Nicolaas Vroom

<nicolaa...@pandora.be> wrote:
>
> Suppose this is the best classical algorithm:
>

It isn't.

Anyway, practical quantum computers can now factor numbers up to 15 in,
err, milliseconds. I guess the next milestone will be when one runs
Microsoft Office.

Charles

Aaron Bergman

Dec 2, 2002, 7:59:54 PM
to
In article <3DE7DD7B...@pandora.be>, Nicolaas Vroom wrote:

> [Moderator's note: "Shor's algorithm" is the answer to your
> question. More generally, quantum computers can be more efficient
> than classical computers because a superposition of states can
> "perform many loops in parallel" without requiring more hardware

> (roughly speaking). - KS]

Willfully disregarding the disclaimer, is there really any sense
in which this is true? It's oft-stated, but from what I know of
Shor's algorithm (and what I know is fairly minimal), it works
because quantum computers can do FFTs very quickly. If one were
really able to do as many loops in parallel as one wanted,
then factoring would take only O(1) time rather than
O(log^3(n)), right?

Aaron

Kevin A. Scaldeferri

Dec 3, 2002, 2:32:04 PM
to
In article <slrnaunn6g....@abergman.student.princeton.edu>,

Well, I'm not an expert in this area by a long shot, but there are
at least two issues here.

First, when you rely on a superposition to "parallelize" a
computation, you need to somehow amplify the correct answer so that
there is high probability that when you collapse the wave function you
end up with the right answer. This is different from a classical
non-deterministic computation in which you can scan through all the
choices that were made to look for a successful execution.
(Fortunately for the QC, factorization can be verified in polynomial
time, so you can just check the answer and repeat the calculation if
you get unlucky.)

Second, I wouldn't expect to be able to get down to constant time,
just based on classical computation theory. Parallelization doesn't
generically allow you to do this. Also, while non-deterministic
machines can be faster than deterministic ones (or, at least it seems
that way), they can't reduce any problem to a constant-time solution.

I'd have to go brush up on Shor's algorithm myself to say any more
about how accurate this sort of statement really is. (Peter Shor does
post here periodically, though... maybe he'll see this and provide a
more rigorously correct explanation and save us the work :-) )

--
======================================================================
Kevin Scaldeferri Calif. Institute of Technology
The INTJ's Prayer:
Lord keep me open to others' ideas, WRONG though they may be.

Greg Kuperberg

Dec 4, 2002, 1:10:37 AM
to

In my opinion "parallel computation" is poor terminology for quantum
computation. First off it's not clear what it means to allow arbitrary
parallel computation. To use Unix terminology, let's say that it means
that you can fork processes for free, and that all of the child processes
run in parallel but do not communicate. Then the next question is
how to reconcile the exponentially many final states of the child processes.
One model is the NP model: Children run for polynomial time. If all
children say "no", then the parent program outputs "no". If at least
one child process says "yes", the parent outputs "yes". Another model
is #P: children run for polynomial time and do not communicate; the
parent outputs the number of child processes that report "yes".

In the NP model, you have to convert everything to yes-no questions.
For instance you can ask whether the number n has a factor in some range
1,...,k. This can be computed in non-deterministic polynomial time,
because each child process can simply guess a different factor in 1,...,k
and test divisibility. But it is not O(1), because the processes need
time to fork and more time to test divisibility.

A reasonable conjecture is that the quantum computation class BQP, if
restricted to yes-no questions, neither contains nor is contained in NP.

My preferred way to describe quantum computation is by analogy with
randomized classical computation. If a computer executes a randomized
algorithm, then in a manner of speaking its state is a classical
superposition. For some purposes this does afford some computational
speedup. In quantum computation you replace this classical superposition
by a quantum superposition. It should be believable that this yields
yet more speedup than classical superposition does.

In the most general formalism of this sort, the state of the computer
is a density matrix, capturing both classical and quantum superposition.
This is in my opinion the best description of quantum computation
and quantum probability theory generally.
--
/\ Greg Kuperberg (UC Davis)
/ \
\ / Visit the Math ArXiv Front at http://front.math.ucdavis.edu/
\/ * All the math that's fit to e-print *

Robert Tucci

Dec 4, 2002, 1:11:11 AM
to
Nicolaas Vroom <nicolaa...@pandora.be> wrote in message
news:<3DE7DD7B...@pandora.be>...

> My question is: why are the two not the same? How do we explain that a
> quantum computer requires such a relatively small number of steps?

Basically, when one counts the
number of elementary operations (EOs)
performed by a classical computer, one
is counting scalar multiplications.
With quantum computers, the EOs are *matrix*
multiplications. The matrices
being multiplied are not arbitrary---
they must act on one or two qubits only, but each
still represents many classical scalar multiplications.
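[Editorial note: this counting argument can be made concrete with a small NumPy sketch (an illustration, not from the post): one quantum "elementary operation" is a small unitary matrix, but applying it to an n-qubit state updates all 2^n amplitudes at once, i.e. many classical scalar multiplications.]

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # one-qubit Hadamard gate
I = np.eye(2)

state = np.zeros(4)
state[0] = 1.0                  # two-qubit state |00>

U = np.kron(H, I)               # Hadamard acting on the first qubit only
state = U @ state               # ONE quantum EO = a 4x4 matrix-vector product

print(state)                    # amplitudes spread over |00> and |10>
```

Even though the gate acts on a single qubit, simulating that one EO classically costs work proportional to the full 2^n-entry state vector.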
Sig:-------------------------
A Petition to the American Physical Society for the Creation of a New
Acronym (R.I.N.N.O.)
http://www.ar-tiste.com/qcomp_onion/rinno.htm

Nicolaas Vroom

Dec 4, 2002, 1:11:55 PM
to physics_research

Greg Kuperberg wrote:

> My preferred way to describe quantum computation is by analogy with
> randomized classical computation.

What do you mean with randomized classical computation?

> If a computer executes a randomized
> algorithm, then in a manner of speaking its state is a classical
> superposition.

What is a randomized algorithm?
Is that an algorithm to calculate random numbers?
or is it a Monte Carlo Simulation which uses random numbers?
What do you mean with classical superposition?

> For some purposes this does afford some computational speedup.

?

> In quantum computation you replace this classical superposition
> by a quantum superposition.

Sorry also not clear.

> It should be believable that this yields
> yet more speedup than classical superposition does.

Hum.

> In the most general formalism of this sort, the state of the computer
> is a density matrix, capturing both classical and quantum superposition.
> This is in my opinion the best description of quantum computation
> and quantum probability theory generally.

Is this really the best description possible ?

Nick
http://users.pandora.be/nicvroom/

Nicolaas Vroom

Dec 4, 2002, 1:12:13 PM
to physics_research

Aaron Bergman wrote:

I did a search with:
http://www.google.com/search?q=Shor%27s+Algorithm&hl=en&lr=&ie=UTF-8&start=0&sa=N

In my first reply to this posting I already discussed some of the
URLs which are mentioned there.

Unfortunately it is still not clear why you need Shor's algorithm
to factor a number.
For example 15 = 3 * 5.
And more specifically, why does this algorithm work faster on a quantum
computer compared with a classical computer?
For example, how do you implement parallelization on a quantum computer?

The following program can be implemented using parallel logic:

For i = 1 to 10000 step 3
Call func1(i, result1)
Call func2(i+1, result2)
Call func3(i+2, result3)
If result1 = 1 or result2 = 1 or result3 = 1 Print "Eureka" : Exit
next i

Func1, Func2 and Func3 are the same functions.
You can only execute those functions in parallel if they are
completely independent of each other.
For a classical computer parallel computation requires more hardware,
i.e. more processors.
(I expect that for quantum computers this is the same.)
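[Editorial note: the loop above can be sketched with a thread pool in Python. This is only an illustration; `func` and the target value 4999 are made-up stand-ins for func1/func2/func3, and on a classical machine each truly simultaneous call needs its own processor core.]

```python
from concurrent.futures import ThreadPoolExecutor

def func(i):                     # stand-in for func1/func2/func3 (same function)
    return 1 if i == 4999 else 0

found = None
with ThreadPoolExecutor(max_workers=3) as pool:
    for i in range(1, 10001, 3):
        # three independent calls dispatched at once, as in the pseudocode
        results = list(pool.map(func, (i, i + 1, i + 2)))
        if 1 in results:
            found = i
            print("Eureka", i)
            break
```

The calls can be dispatched together only because they do not depend on each other, which is exactly the independence condition the post states.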

IMO parallelization alone is not the true reason why quantum computers
are faster.
The reason must be that they use "superposition of states"
(or a combination of both).
But how that works (and what the limits are) is not clear to me.

Maybe this URL gives the answer:
http://www.ar-tiste.com/reservoir_noise.html

Nick.

Nicolaas Vroom

Dec 4, 2002, 9:45:11 PM
to physics_research

In order to improve my knowledge about Shor's algorithm I did this
search:
http://www.google.com/search?q=Shor%27s+Algorithm&hl=en&lr=&ie=UTF-8&start=0&sa=N

Following are some results:
1. Quantum Computing, Shor's Algorithm, and Parallelism
Arun Bhalla, Kenneth Eguro, Matthew Hayward
http://alumni.imsa.edu/~matth/quant/433/shor-par/

Shor's algorithm is discussed at:
http://alumni.imsa.edu/~matth/quant/433/shor-par/node12.html
The most details are at:
http://alumni.imsa.edu/~matth/quant/433/shor-par/node15.html
This page claims that in order "for factoring a given integer n"
you need (see steps 1-4) a classical computer.
I found that very strange.
Starting with step 5 you need a quantum computer.
In a later step a discrete Fourier transform is computed.
Why do you need that in order "for factoring a given integer n"?

http://alumni.imsa.edu/~matth/quant/433/shor-par/node16.html
claims:
"In general Shor's algorithm simulation seems a good candidate
for parallelization."
Why the word simulation ?
Why the word seems ?
Does this imply that we are not sure ?

2. The following url also explains Shor's Algorithm:
http://tph.tuwien.ac.at/~oemer/doc/quprog/node18.html

3. And also this one does.
http://www.wikipedia.org/wiki/Shor's_algorithm

4. IMO the best one is:
http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
This document explains in detail:
Example factor n = 55

After studying the above documents (I am still in that
process) I have the following remarks.
1. Shor's algorithm allows you (as far as I currently
understand) to perform factoring of an integer.
2. In order to do that you need both a classical computer
and a quantum computer.
3. It is not clear to me if you can do this whole calculation
solely on a classical computer.
4. It is not clear to me what the advantages are by using
a quantum computer.
5. Assuming remark 1 is correct, and assuming that using a classical
computer and a quantum computer together is the best way to
implement Shor's algorithm, then I am still not convinced
that quantum computers in general are faster
(and eventually will replace classical computers)
6. Shor's Algorithm smells like a Monte Carlo simulation.
As part of the algorithm you have to observe register 2
and to measure register 1 (See url in remark 4)
It is not clear to me if you only have to do this once.
7. Assuming that you have to do this more often,
what is the chance that you never find the right answer,
specifically with large numbers?
8. It is interesting to study what happens if the number
that you want to factor (using Shor's algorithm)
is a prime number.
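[Editorial note: the number-theoretic skeleton of Shor's algorithm can be simulated entirely classically, which speaks to remarks 2-4. In the sketch below (an illustration, not from the posts) the brute-force period search in the middle is exactly the step a quantum computer replaces with the Fourier-transform trick.]

```python
from math import gcd

def shor_classical(n, a):
    """Classical skeleton of Shor's algorithm: find the period r of
    f(x) = a^x mod n by brute force (the only step a quantum computer
    actually speeds up), then derive factors of n from r."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g                  # lucky guess: a shares a factor with n
    r = 1
    while pow(a, r, n) != 1:              # brute-force period finding
        r += 1
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None                       # bad choice of a: retry with another
    p = gcd(pow(a, r // 2) - 1, n)
    return p, n // p

print(shor_classical(15, 7))  # → (3, 5)
```

Regarding remark 8: for a prime n every choice of a ends in the "bad choice" branch, so the procedure never reports a nontrivial factor. Regarding remarks 6-7: a `None` result means you repeat with a fresh random a, and the failure probability shrinks rapidly with each repetition.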

Nick

Nicolaas Vroom

Dec 5, 2002, 1:46:17 PM
to physics_research


Charles DH Williams wrote:
> In article <3DE7DD7B...@pandora.be>, Nicolaas Vroom
> <nicolaa...@pandora.be> wrote:
> >
> > Suppose this is the best classical algorithm:
> >
>
> It isn't.

I agree with you.
However that is not the issue.
The problem is: when you start with a given algorithm
which consists of a certain number of loops
and requires a certain number of steps on a classical computer,
how can you do that SAME problem in far fewer steps
on a quantum computer?

Both computers considered should be the "same"
(i.e. both should be standard off-the-shelf
or both can be special purpose),
otherwise any comparison is not scientific.

Hans Aschauer

Dec 5, 2002, 1:48:16 PM
to sci-physic...@moderators.isc.org

Aaron Bergman wrote:

> In article <3DE7DD7B...@pandora.be>, Nicolaas Vroom wrote:
>
>> [Moderator's note: "Shor's algorithm" is the answer to your
>> question. More generally, quantum computers can be more efficient
>> than classical computers because a superposition of states can
>> "perform many loops in parallel" without requiring more hardware
>> (roughly speaking). - KS]
>
> Willfully disregarding the disclaimer, is there really any sense
> in which this is true? It's oft-stated, but from what I know of
> Shor's algorithm (and what I know is fairly minimal), it works
> because quantum computers can do FFTs very quickly.

There is a sense in which this quantum parallelism is true. The following
example is one part of Shor's algorithm.

Suppose you have a function f:{1,2,3,...,N} -> {1,2,3,...,N} for some
sufficiently large N. The only thing which you know about f is that it is
periodic, i.e. (roughly speaking) there exists a number m such that f(i) =
f(i+m) for all i. If you only have a classical computer at your disposal,
how often will you have to evaluate f in order to learn m? Right, ~N times.

Using a quantum computer, you can evaluate "all values of f at the same
time", by feeding a superposition of all inputs into the function
evaluation. However, there is no way to get all results out of the QC, but
in this specific case, the only thing you are interested in is the
periodicity of the function, while all individual function values are
irrelevant. And this is precisely what the QC allows you to do: you can
read out this global property of the function, without getting any
knowledge about individual function values.
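[Editorial note: the classical side of this comparison can be sketched in a few lines of Python. The function f and the sizes here are made-up examples; the point is only that the direct search evaluates f on the order of N times, which is the cost the quantum computer avoids.]

```python
def find_period(f, N):
    """Brute-force classical period finding: try each candidate period m
    and check it against all applicable inputs in {1,...,N}. In the worst
    case this costs on the order of N evaluations of f."""
    for m in range(1, N):
        if all(f(i) == f(i + m) for i in range(1, N - m + 1)):
            return m
    return None

f = lambda i: i % 5        # a made-up periodic function with period 5
print(find_period(f, 20))  # → 5
```

The quantum algorithm extracts only this global property m, never the individual values of f, which is why no contradiction arises with the impossibility of reading out all 2^n results.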

Hope that helps,

Hans

Nicolaas Vroom

Dec 5, 2002, 2:05:19 PM
to physics_research
[Moderator's note: this discussion is wavering on the edge of
appropriateness for SPR. I urge people to try to stick to issues of
how quantum computers work & how they are different from classical
system, and relegate general discussion of the theory of computability
to other forums. -- KS]


Robert Tucci wrote:
>
> Nicolaas Vroom <nicolaa...@pandora.be> wrote in message
> news:<3DE7DD7B...@pandora.be>...
>
> > My question is: why are the two not the same? How do we explain that a
> > quantum computer requires such a relatively small number of steps?
>
> Basically, when one counts the
> number of elementary operations (EOs)
> performed by a classical computer, one
> is counting scalar multiplications.

What is your definition of scalar multiplications ?

> With quantum computers, the EOs are *matrix* multiplications.

What is your definition of matrix multiplications ?

> The matrices
> being multiplied are not arbitrary---
> they must act on one or two qubits only, but each
> still represents many classical scalar multiplications.

Suppose you have to perform the following program:
1 for i = 2 to 100000
2 for j = 2 to i-1
3 A(j) = A(j) * B(j) / C(j)
4 A(j) = A(j-1) + A(j) + A(j+1)
5 next j
6 next i

Line 3 shows an example of matrix manipulation.
The inner loop will benefit from parallelization.
This is true for a classical computer and
(I expect) for a quantum computer.
For a classical computer this requires
more hardware (multiprocessors) and more memory.
For a quantum computer I do not know what the
consequences are.
I expect certain limits are involved.

Line 4 shows an example of matrix manipulation.
However this one will NOT benefit from parallelization.
This is true for a classical computer.
I expect the same for quantum computer.

> Sig:-------------------------
> A Petition to the American Physical Society for the Creation of a New
> Acronym (R.I.N.N.O.)
> http://www.ar-tiste.com/qcomp_onion/rinno.htm

Nick
http://users.pandora.be/nicvroom/

Greg Kuperberg

Dec 5, 2002, 2:27:27 PM
to sci-physic...@moderators.isc.org

[Moderator's note: As I noted in another moderator's note, discussion
in this thread needs to focus on physics aspects of quantum
computation or move to another forum. To a limited extent, posts like
this that provide definitions are okay, but, in general, theory of
computation is off-topic in SPR. -- KS]

In article <aslghb$orv$1...@panther.uwo.ca>,


Nicolaas Vroom <nicolaa...@pandora.be> wrote:
>> If a computer executes a randomized
>> algorithm, then in a manner of speaking its state is a classical
>> superposition.
>What is a randomized algorithm?

...


>or is it a Monte Carlo Simulation which uses random numbers?

A Monte Carlo simulation is an example of a randomized algorithm.
In general a randomized algorithm is one that uses a source of random
numbers for some computational purpose. For instance the Miller-Rabin
primality test gives statistical evidence as to whether or not a number
n is prime by choosing a random residue from 0 to n-1.

>What do you mean with classical superposition?

In the sense that if you flip a coin, the coin is in a superposition of
its heads and tails states until you observe it. Although this is stated
using quantum jargon, I am only describing classical probability theory
here. In the context of quantum mechanics, a classical superposition is
often called a "mixture" and is properly described by a density operator.

In fact there is no strict dichotomy between mixture and quantum
superposition, or between classical correlation and quantum entanglement.
In both cases the latter is properly understood as a purer form of the
former. For instance if you dilute quantum entanglement with noise,
it degenerates to classical correlation.

>> For some purposes this does afford some computational speedup.
>?

One example is primality tests. The drawback of Miller-Rabin is that it
never provides a guarantee that n is prime, only a likelihood. On the
bright side, it is faster than any deterministic primality algorithm.

Other algorithms are in the ZPP class. This means that the output is
guaranteed to be correct, but the algorithm only probably runs quickly.
Here too there are primality algorithms that use a random number generator
and that usually run faster than the fastest deterministic algorithm.
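[Editorial note: the Miller-Rabin test mentioned above can be sketched as follows. The round count and test numbers are illustrative choices; each round picks a random residue and can only ever prove compositeness, so a "probably prime" answer carries a small error probability, as the post says.]

```python
import random

def miller_rabin(n, rounds=20):
    """Randomized primality test: write n-1 = d * 2^s with d odd,
    then probe random bases a for evidence that n is composite."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # a is a witness: n is definitely composite
    return True                # no witness found: n is probably prime

print(miller_rabin(561))       # 561 = 3 * 11 * 17, a Carmichael number
```

Each round that fails to find a witness at least quarters the chance of a wrong "prime" verdict, so 20 rounds leave an error probability below 4^-20.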

Kevin A. Scaldeferri

Dec 4, 2002, 1:49:18 PM
to
In article <aslghb$orv$1...@panther.uwo.ca>,
Nicolaas Vroom <nicolaa...@pandora.be> wrote:

>Greg Kuperberg wrote:

>> My preferred way to describe quantum computation is by analogy with
>> randomized classical computation.

>What do you mean with randomized classical computation?

>> If a computer executes a randomized
>> algorithm, then in a manner of speaking its state is a classical
>> superposition.

>What is a randomized algorithm?
>Is that an algorithm to calculate random numbers?
>or is it a Monte Carlo Simulation which uses random numbers?

More like the latter. It is an algorithm in which you can have steps
that say "generate a random number". Usually this is good for getting
an answer which has a high probability of being right. Hopefully you
can verify an answer efficiently, so at worst you just have to run it a
couple of times to get a correct answer.

An example that you use all the time, but probably don't realize, is
authentication for secure communications. If I want to authenicate
you (i.e. convince myself you really are who you claim), I generate a
random number, encrypt it with your public key, and send it to you to
decrypt and return to me. If you return the number I started with
then there is a high probability that you are who you claim to be.
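[Editorial note: a toy sketch of this challenge-response idea, using textbook RSA with tiny made-up numbers. Real systems use large keys and padding; everything here is purely for demonstration.]

```python
import random

p, q = 61, 53
n = p * q                                # public modulus
e = 17                                   # public exponent
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (Python 3.8+)

challenge = random.randrange(2, n)       # I pick a random number...
encrypted = pow(challenge, e, n)         # ...and encrypt it with your public key
response = pow(encrypted, d, n)          # you decrypt with your private key

print(response == challenge)             # → True: you hold the private key
```

Only the holder of the private exponent d can turn the encrypted challenge back into the original number, which is what gives the high probability of a correct identification.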

Anyway, when you allow randomization, the complexity classes change.

In some vague way you can imagine that quantum computation is similar
to randomized classical computation, although as Greg states:

>> In quantum computation you replace this classical superposition
>> by a quantum superposition.

So the complexity classes for quantum computations should be different
than randomized classical computations.

In another vague sense, you can imagine that quantum computation is
also similar to non-deterministic classical computation. I couldn't
readily give you a description of the rigorous difference between the
two, analogous to the above statement about superposition, though.

Chris Pollett

Dec 5, 2002, 9:37:59 PM
to
Hi,

I figured I'd put in my two cents of intuition.

> In my opinion "parallel computation" is poor terminology for quantum
> computation. First off it's not clear what it means to allow arbitrary
> parallel computation. To use Unix terminology, let's say that it means
> that you can fork processes for free, and that all of the child processes
> run in parallel but do not communicate. Then the next question is
> how to reconcile the exponentially many final states of the child
> processes.
> One model is the NP model: Children run for polynomial time. If all
> children say "no", then the parent program outputs "no". If at least
> one child process says "yes", the parent outputs "yes". Another model
> is #P: children run for polynomial time and do not communicate; the
> parent outputs the number of child processes that report "yes".
>

BQP can be viewed as being contained in a parallel computation
class as it is contained in PSPACE and the latter class can be
characterized as what you can do in polynomial time if you had
access to polynomial in the input many processors. I think the
lowest known class classical class containing BQP is AWPP which
is a low class in the counting hierarchy which is due to Fortnow
and Rogers. The counting hierarchy is basically what you get when
you allow yourself the ability to compute exponential sums and
products of the results of PTIME computations. That the counting
hierarchy is contained in PSPACE can be seen by realizing that to
compute say an exponential sum of PTIME computations, we need to
keep track of the sum computed so far and then the next term in
the sum to be computed, both of which are polynomial size.

Here is my intuition as to why polynomial time quantum
computations can be simulated using functions from the counting
hierarchy and what makes it hard to simulate a quantum
computation in a smaller class like PTIME. To simulate a quantum
computation (I am simplifying here a bit), the things we need to
be able to compute classically are expressions like <vec{y}|U|vec{x}>.
Here x = x1,...,xn in this setup is some number in binary,
and |vec{x}> is the column vector we get by Kronecker-producting
n vectors of the form [1,0]^T or [0,1]^T together, depending on
whether xi was 0 or 1. So |vec{x}> will be a column vector of length 2^n
with all entries 0 except for a single 1. <vec{y}| is arrived at in
similar fashion, producting row vectors together rather than
column vectors. U is supposed to be our polynomial time quantum
computation. We can view it as resulting from matrix-multiplying
layers L1...L_p(n), where each layer is either a 2^n by 2^n
permutation matrix or a Kronecker product of our allowed base
unitary gates (which we usually think of as fixed-size
matrices, maybe a 2x2 Hadamard gate, controlled-not gates, etc.).

Being able to simulate the computation, it turns out, boils down
to being able to compute the (i,j) entry of U efficiently. To
calculate the (i,j) entry of a layer in the case it is a
permutation matrix is straightforward. In the case it is a
Kronecker product layer, one will tend to need to compute a
polynomial-sized product of complex numbers to get the (i,j)
entry. To compute the (i,j) entry of the product of two layers L and L',
one will need to compute Sum_{k=1}^{2^n} l_{ik} l'_{kj}, an exponential
sum. So the operations we need to simulate are of the right size
to be done in the counting hierarchy. Note I am sweeping under
the rug how to handle the complex numbers, but this turns out not
to be where the hardness is coming from.

The main question is: when is it easier to compute the (i,j) entry
of U? Basically, if the layers of U are matrices in which there
are only polynomially many non-zero entries in each row and
column, and those entries are polynomial time to find, then the
whole computation will be in PTIME. What kind of layers satisfy
this condition? Well, for instance, a permutation matrix will only
have one 1 in each row or column. Layers made from reversible
classical gates also don't tend to present a problem. Bad layers
result from doing things like Kronecker-powering more than
logarithmically many Hadamard gates together or, for instance,
having a layer that does the quantum Fourier transform. After
creating such a layer it seems hard to avoid brute-force
computation of an exponential sum to figure out the (i,j) entry
one would get when we multiply this against another layer in a
typical quantum algorithm.
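[Editorial note: this difference can be made concrete in a small NumPy sketch (an illustration with n = 3 Hadamard gates, not from the post): an entry of one Kronecker-product layer is a short product of gate entries, while an entry of the product of two layers is a sum over all 2^n intermediate indices.]

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # 2x2 Hadamard gate

n = 3
layer = H
for _ in range(n - 1):
    layer = np.kron(layer, H)                  # one 2^n x 2^n layer of Hadamards

def layer_entry(i, j):
    """Entry (i, j) of the layer: a product of n scalar gate entries
    (cheap, since all factors here are the same 2x2 gate)."""
    e = 1.0
    for bit in range(n):
        e *= H[(i >> bit) & 1, (j >> bit) & 1]
    return e

def product_entry(i, j):
    """Entry (i, j) of the product of two layers: a sum over all 2^n
    intermediate indices k, the exponential sum described above."""
    return sum(layer_entry(i, k) * layer_entry(k, j) for k in range(2 ** n))

print(np.isclose(layer_entry(5, 3), layer[5, 3]))
print(np.isclose(product_entry(0, 0), (layer @ layer)[0, 0]))
```

For n = 3 the sum has only 8 terms, but for n qubits it has 2^n, which is the brute-force cost that seems unavoidable once a layer mixes many Hadamards.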

Chris Pollett

Robert Tucci

Dec 6, 2002, 2:37:10 PM
to sci-physic...@moderators.isc.org

Nicolaas Vroom <nicolaa...@pandora.be> wrote in message news:<3DEE0367...@pandora.be>...

>What is your definition of scalar multiplications ?
>What is your definition of matrix multiplications ?

In linear algebra (vector spaces),
one speaks of
matrices (linear transformations)
and scalars.
For example, think of scalars
as complex numbers or 1-by-1 matrices.

> Suppose you have to perform the following program:
> 1 for i = 2 to 100000
> 2 for j = 2 to i-1
> 3 A(j) = A(j) * B(j) / C(j)
> 4 A(j) = A(j-1) + A(j) + A(j+1)
> 5 next j
> 6 next i

I think this "code" is misleading.
What I recommend is that you study
the Quantum Fourier Transform.
A simple introduction to it can be found,
for example, in Mermin's class notes
http://www.ccmr.cornell.edu/~mermin/qcomp/CS483.html
Then study the classical FFT.
An introduction to that may be found, for example,
in the Numerical Recipes book, available online at
http://www.nr.com/
Now study how these guys count
the number of "elementary"
operations in each.

Sig--------------------------


A Petition to the American Physical Society for the Creation of a New

Acronym (RINNO)
http://www.ar-tiste.com/qcomp_onion/rinno.htm

Greg Kuperberg

Dec 6, 2002, 9:49:31 PM
to
In article <9VFH9.3143$Eu4.66...@newssvr13.news.prodigy.com>,

Chris Pollett <cpol...@yahoo.com> wrote:
>BQP can be viewed as being contained in a parallel computation
>class as it is contained in PSPACE and the latter class can be
>characterized as what you can do in polynomial time if you had
>access to polynomial in the input many processors.

When composing my previous posting I had trouble characterizing PSPACE in
my mind this way. However, I do remember the result that a sufficiently
general tree search with alternating quantifiers is PSPACE-complete -
e.g. "generalized chess" is a PSPACE-complete problem. Clearly such
a search may be done in polynomial time if you can fork processes
for free. Conversely, polynomially many processors can be simulated in
polynomial space. So yes, PSPACE is a natural model for fully parallel,
polynomial-time computation. Thanks for mentioning this observation.

It is certainly important to know that BQP is contained in PSPACE.
However, the proper way to understand that is that BQP is *subsumed*
by parallel computation. To say that "quantum computation is a kind
of parallel computation", or even "quantum computation allows massive
parallelism", leaves many people with the impression that BQP = PSPACE,
or at least that BQP is not much smaller than PSPACE. I'm sure you
would accept the conjecture that BQP is much smaller than PSPACE (for
one thing because it's contained in AWPP, as you say).

My point is that conceptually BQP is a lot like BPP (the most general
randomized polynomial-time class), only bigger. At least, that's my
conception of it.

You could also point out that BPP is contained in PSPACE. But most people
have no trouble understanding that randomized computation is parallel
only in a very weak sense. To be sure, it is natural to conjecture
that BPP = P and that BQP != P. On the other hand, it is also natural
to conjecture that randomization yields a quadratic speedup for some
problems over determinism (maybe that's even a theorem). People should
appreciate that phenomenon as a warmup to understanding BQP.

Suresh __NoJunkMail kumar

Dec 9, 2002, 2:02:17 AM
to sci-physic...@moderators.isc.org

Hi Nicolaas

First, let me say that I'm just an ordinary student of
physics. The people here have a lot more experience
than I do.

Anyway, I will give you some pointers.

Quantum Computers do not function like classical
computers and so you cannot program them like
classical computers. Things like loops, if statements,
etc do not directly make sense.

Basically, here is a description of a Quantum Computer
and how it is supposed to work. The description is a
little bit amateurish.

In quantum mechanics, we talk about ensembles and how
things can exist in superpositions of states.

For example, here's the classical analogy.

Imagine the boolean gate OR Function

OR(X,Y) = X' * Y + X * Y + X * Y'

Basically, according to classical boolean theory,

OR(X,Y) will be a 1 if X'*Y is 1, that is to say
X=0, Y=1, or if X*Y is 1, that is X=1, Y=1, or if X*Y' is
1, that is X=1, Y=0.

Basically, you also need to notice that X'*Y is
orthogonal to X*Y'.

That means X'*Y*X*Y' = 0, which is true because X'*X = 0
and Y'*Y = 0.

Now, the above expression can be rewritten as follows:

OR = |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0|

On a classical computer, at any one time X can be either
0 or 1, and so can Y, and so the output is only
allowed to be 1 or 0.

Now, on the other hand, in quantum mechanics,
particles can exist in superpositions of states; that
means that

X can be expressed as a combination of 0 and 1:
|X> = c_x0 |0> + c_x1 |1>

Basically, |c_x0|^2 is taken to be the probability of
X being a 0. That means that if you measure the state
of a particle |X>, you are guaranteed to get 0 or 1,
not both at the same time. However, you cannot predict
with certainty which state you are going to get.
The only certainty is that the probability of getting
0 at X is |c_x0|^2 and the probability of getting 1 is
|c_x1|^2.

Moreover, interestingly, c_x0 can also be a complex
number. After all, the probability is only governed
by |c_x0|^2.

So, you may wonder what it means to say that X is
in a superposition of both 1 and 0. There are many
interpretations. One interesting one is the many-worlds
interpretation.

It says that when a particle is prepared in a
superposition of states, like our |X>, the world
diverges: one branch with X=0, weighted |c_x0|^2,
and one with X=1, weighted |c_x1|^2,

where |c_x0|^2 + |c_x1|^2 = 1

And we can also express Y as follows
|Y> = c_y0 |0> + c_y1|1>

Now, if you increase the number of input variables,
say to n = 128, you have in fact prepared a
system with 2^128 states. However, if you try to
measure the state of |X_1,X_2,...,X_n>, you will only
get one of all possible 2^128 states. Because of these
dynamics, a quantum computer can try 2^128 states in
parallel at the same time.
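A classical sketch of that state counting (mine, with NumPy; n = 10 rather than 128, since 2^128 amplitudes cannot be stored): n qubits, each in an equal superposition, give a state vector with 2^n amplitudes, yet one measurement returns just one basis state.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                                      # 2^128 as in the post is far too big to store
plus = np.array([1.0, 1.0]) / np.sqrt(2)    # one qubit with c_0 = c_1 = 1/sqrt(2)

# Tensor (Kronecker) product of n such qubits: 2^n equal amplitudes
state = np.array([1.0])
for _ in range(n):
    state = np.kron(state, plus)

assert state.size == 2 ** n
assert np.allclose(np.abs(state) ** 2, 1 / 2 ** n)   # uniform probabilities

# A measurement yields exactly one of the 2^n basis states
outcome = rng.choice(2 ** n, p=np.abs(state) ** 2)
print(f"measured basis state {outcome:0{n}b}")
```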

Now, our expression is still given by

OR(X,Y) = X' * Y + X * Y + X * Y'

However, you need to understand that since X and Y are
no longer binary, OR(X,Y) is not guaranteed to be 1 or 0.
Rather, OR(X,Y) has a probability of being 1 or 0.

Up to now, you may be inclined to ask how this is
useful. After all, you are preparing a system in
2^128 states, and you get only one answer. So, you may
ask how you can get the answer you are looking for.

Now, quantum mechanics is not as simple as the worlds
just diverging. In fact, the worlds can interfere and
cancel each other out.

|c_x0|^2 is the probability of getting X=0; however,
c_x0 can also be a complex number, so it has a phase:
c_x0 = p_x0 exp(i theta_x0).

This makes matters a lot more complicated.

The innocent looking presence of phase allows the
different worlds to interfere.

Let's go back to our expression

OR = |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0|

Our OR gate can be seen as an evolution device. In
quantum mechanics, the word used to denote an OR-like
device is 'operator'. Basically, the application of
the OR gate to the quantum input system |X,Y>
evolves it from that state to the state at the output.

After all, this is what even classical gates do: given
an input state, they take it to the output state.

Moreover, just as in our classical device, |k><i,j| is
orthogonal to |k'><i',j'|

where k' != k, i' != i and j' != j. Here a != b
means a is not equal to b.


In quantum mechanics, there is a requirement of
probability conservation.

Basically, it goes something like this:

sum_i |c_xi|^2 = 1

This is true of any probability theory: the sum of
probabilities is always 1. After all, this is an axiom
of probability theory.

There is another equivalent statement called the
Unitary requirement that has the same effect as
probability conservation.

Our expression
OR = |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0|

[Thanks to Kevin Scaledeferri, for pointing out the
correct form of the expression]

Suppose we rewrite our OR expression as

OR = a[01->1] |1><0,1| + a[10->1] |1><1,0| + a[11->1]
|1><1,1| + a[00->0] |0><0,0|

Comparing the two expressions above, we can read off that

a[ij->k] = 1.

However, sum_[ij->k] |a[ij->k]|^2 != 1, and so it
does not conserve probability.

However, for example,

OR = (1/sqrt(4)) ( |1><0,1| + |1><1,0| + |1><1,1| +
|0><0,0|)

does

a[ij->k] = 1/2

sum_[ij->k] |a[ij->k]|^2 = 1


Now, suppose we multiply |X> and |Y>; we get

|X>|Y> = c_x0 c_y0 |X=0>|Y=0> + c_x1 c_y0 |X=1>|Y=0>
       + c_x0 c_y1 |X=0>|Y=1> + c_x1 c_y1 |X=1>|Y=1>

otherwise written as

|X,Y> = c_x0 c_y0 |0,0> + c_x1 c_y0 |1,0>
      + c_x0 c_y1 |0,1> + c_x1 c_y1 |1,1>

where |c_x0 c_y0|^2 is the probability of getting
X=0, Y=0. It is also consistent with |c_x0 c_y0|^2 =
|c_x0|^2 |c_y0|^2. This expression means that p(X=0
and Y=0) = p(X=0) p(Y=0).

This is a natural expression for the probability of
independent events.

Now, back to our OR operator

Now, if I apply the OR operator to our input state
|X,Y>,

OR|X,Y> = (1/sqrt(4)) ( |1><0,1| + |1><1,0| + |1><1,1| + |0><0,0| )
          ( c_x0 c_y0 |0,0> + c_x1 c_y0 |1,0> + c_x0 c_y1 |0,1> + c_x1 c_y1 |1,1> )

the resulting expression is

        = (1/2) |1> ( c_x1 c_y0 + c_x0 c_y1 + c_x1 c_y1 )
        + (1/2) |0> ( c_x0 c_y0 )


So, the probability of getting a 1 is

|<1|OR|X,Y>|^2 = (1/4) | c_x1 c_y0 + c_x0 c_y1 + c_x1 c_y1 |^2

               = (1/4) | p_x1 p_y0 exp(i(theta_x1 + theta_y0))
                       + p_x0 p_y1 exp(i(theta_x0 + theta_y1))
                       + p_x1 p_y1 exp(i(theta_x1 + theta_y1)) |^2

In the above expression, it is important to notice
that the phases can interfere and cancel each other
out. This is what creates all the difference in the
world. The phases are not as innocuous as we had
thought them to be.

So, not only do the worlds separate at each instant,
they also interfere with each other. So, voila:
parallel computing.
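The cancellation above can be seen numerically (my sketch; it uses the post's normalized OR as given, ignoring that this OR is not actually a unitary operator): flipping the phase of one amplitude makes two of the three terms cancel.

```python
import numpy as np

def prob_one(c_x0, c_x1, c_y0, c_y1):
    """P(output = 1) for the normalized OR operator of the post:
    |<1|OR|X,Y>|^2 = (1/4) |c_x1 c_y0 + c_x0 c_y1 + c_x1 c_y1|^2."""
    amp = 0.5 * (c_x1 * c_y0 + c_x0 * c_y1 + c_x1 * c_y1)
    return abs(amp) ** 2

s = 1 / np.sqrt(2)

# Equal amplitudes, all phases zero: the three terms add constructively
p_plus = prob_one(s, s, s, s)       # (1/4) |3/2|^2 = 9/16

# Flip the phase of c_y1 (multiply by exp(i*pi) = -1): two terms cancel
p_mixed = prob_one(s, s, s, -s)     # (1/4) |1/2 - 1/2 - 1/2|^2 = 1/16

assert np.isclose(p_plus, 9 / 16)
assert np.isclose(p_mixed, 1 / 16)
```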

----
Reversible Computing and answers to NP-completeness

An example of factorization
----

Basically, suppose we take a 2n-bit multiplier with
two n-bit multiplicands and a 2n-bit output. Now,
suppose we are interested in knowing a few things. We
take the output of the 2n-bit multiplier and put a
big AND gate at the end. The output of this gate is
true if, in fact, an input combination of multiplicands
produces the number to be factored at the output.
Suppose we are interested in factoring
[O_0,O_1,O_2,...,O_n] = [0,1,1,0,...,1]. Let's call
this number N. Our AND gate is basically
O_0'*O_1*O_2*O_3'*...*O_n

In normal binary computers, certain operations are not
reversible.

For example, back to our OR gate.

Suppose X=0, Y=1. The output of the OR gate is 1.
However, based on the output, we cannot tell whether
the input was X=0,Y=1, X=1,Y=1 or X=1,Y=0. So, you can
even say that information is lost from the inputs to
the outputs. Suppose I have a random binary generator
at the input. Based on the output of a classical gate,
it is mostly impossible to tell what the inputs of the
circuit were.

Now, the idea of quantum computing leads to
reversible computing. [I'm not an expert on this topic
either. I am not sure whether quantum computing requires
reversible computing.] However, a quantum
equivalent of a 2-bit AND gate has 4, not 3, connections
coming in and going out:

a quantum 2-qubit AND gate needs to have 2 inputs, one
output, and one reversibility channel.

So now, suppose we have the quantum equivalent of
this classical multiplier. Because of reversibility,
we can propagate the 1 from the output of our AND
gate back through the multiplier, and trace it back
to the input combinations that generate the 1 at the
output. Notably, these combinations are also factors
of our number N.
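The standard reversible AND is the Toffoli (controlled-controlled-NOT) gate; here is a small sketch of it (mine, not from the post) as a permutation matrix, showing that it is its own inverse and that with the extra wire held at 0 it computes AND. The extra input/output wires are what the post calls the reversibility channel.

```python
import numpy as np

# Toffoli on (a, b, c): outputs (a, b, c XOR (a AND b)).
# As a matrix on the 8 basis states it only swaps |110> and |111>.
T = np.eye(8)
T[[6, 7]] = T[[7, 6]]            # swap rows for |110> and |111>

# Reversible: the gate is its own inverse (a permutation matrix)
assert np.array_equal(T @ T, np.eye(8))

def toffoli(a, b, c):
    """Apply T to the basis state |a,b,c> and decode the output bits."""
    idx = 4 * a + 2 * b + c
    out = int(np.argmax(T[:, idx]))
    return out >> 2 & 1, out >> 1 & 1, out & 1

# With the third wire (ancilla) set to 0, the third output is AND(a, b),
# and the inputs a, b survive unchanged, so nothing is lost.
for a in (0, 1):
    for b in (0, 1):
        assert toffoli(a, b, 0) == (a, b, a & b)
```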

Shor's algorithm is different. But, hopefully, this
helps.

-suresh


Nicolaas Vroom

Dec 11, 2002, 8:38:41 PM12/11/02
to physics_research
Peter Shor wrote:
>
> I just noticed this topic, and I'm going to try to answer a few of the questions.

>
> Nicolaas Vroom <nicolaa...@pandora.be> writes:
> > Unfortunate it is still not clear why you need Shor's Algorithm
> > to factoring a number.
> > For example 15 = 3 * 5.
>
> You don't need Shor's algorithm to factor a number ... you just factored
> 15 without even using a computer.

After studying Shor's algorithm and quantum computers
(I'm still in that process), four items are still not clear to me:
1. Why does one need an FFT in order to factor a number?
2. Why are quantum computers so much faster?
3. Shor's algorithm does not always work.
4. Why not implement the whole algorithm on a quantum computer?

See my previous posting and study the program
http://users.pandora.be/nicvroom/findprim.txt
http://users.pandora.be/nicvroom/findprim.zip

> > At page 51 of Scientific November 2002 is written:

> > The best classical algorithm would take about 5*10E24 steps.
> > By taking advantage of innumerable quantum states
> > a quantum factoring algorithm would take only 5*10E10 steps"
>

> So in theory, you can factor a 300-digit number with a classical computer,
> we just don't have any classical computers fast enough.
Not only in theory but also in practice.
The major issue is time.

> And 500-digit numbers,
> without truly signifiant advances in either classical computing
> power, in algorithms, or in quantum computers, are out of reach.
For quantum computers it is still only in theory

> Nicholas Vroom also asked


>
> > My question is why are both

> [e.g., the classical and quantum number of steps needed


> > not the same.
> > How do we explain that a quantum computer requires
> > such a relative small number of steps.
>

> There are computational paths from a number to its factors that only a quantum
> computer can navigate. Quantum computers can perform steps that might be
> described in English as "rotate qubit 2 by 90 degrees coherently conditioned
> on qubit 1 being 0," which translates in mathematics to applying a specific
> 4x4 matrix on the quantum state space of qubits 1 and 2, and in a specific
> quantum computer implementation might translate to applying a fixed sequence
> of properly shaped laser pulses to ions in an ion trap.
Everything you do here should be clear and unambiguous.
If it is clear, then you can also do it on a classical computer.

> Try and do this kind
> of step on a classical computer. A good question here is: how does this
> kind of step help?
Exactly.
Why do you need this?
Does it work in practice? How far are we in the laboratory?

> For that, you'll have to look more closely at a specific
> quantum computing algorithm.
>
> There are some fairly good elementary explanations of how these work at
> http://www.qubit.org

As suggested by Robert Tucci, I also studied:
http://www.ccmr.cornell.edu/~mermin/qcomp/CS483.html
specifically chapter 1 and chapter 3.
I'm not convinced that a quantum computer is so much richer.
I'm not sure you can calculate 3 * 5 reliably on a quantum
computer.

I will address questions to this on a separate posting

> > Suppose this is the best classical algorithm:
>
> and gives pseudo-code for trial division.
>
> The computer science side of me absolutely insists on saying something here.
> This is nowhere near the best algorithm,
There are two issues involved:
1. a) Start from a given algorithm, b) implement that algorithm on both
a classical computer and a quantum computer, c) calculate solution time,
i.e. speed.
2. Find the fastest algorithm which solves a given problem for both
a classical computer and a quantum computer.

The pseudo-code addresses the first issue,
i.e. I want to understand if and how quantum computers are faster
using the same algorithm (program).

> and while it is possible to find
> slower algorithms (e.g., simulating the quantum factoring algorithm on
> a classical computer), trial division takes around 10E150 steps for a
> 300 digit number, as opposed to the 10E24 quoted above for (I assume) the
> number field sieve algorithm. Notice that the speedup gained by clever
> use of classical algorithms--126 orders of magnitude--far exceeds the
> speedup gained by switching to a quantum algorithm--only(!) 14 orders of
> magnitude. Both of these speedups increase dramatically as the number to
> be factored grows larger.

Conclusion:
The speedup gained by the sieve algorithm is 126 orders of magnitude.
The speedup gained by the quantum algorithm is 140 orders of magnitude.
How do you explain that the quantum algorithm is better than
the sieve algorithm?
Is this more than only a prediction?
IMO it is easily possible that factoring a number
on a quantum computer plus classical computer combination is slower
than performing the same factoring on a classical computer
which uses the sieve algorithm.


> |> And more specific why this algorithm works faster on a quantum computer
> |> compared with a classical computer.
>

> As I said above, there are steps you can do on a quantum computer you
> just can't do on a classical computer.

This is still not clear to me.
It should be possible to simulate any program that runs on a quantum
computer on a classical computer.
I'm waiting to read why my findprim.exe program is "wrong".

Nick
http://users.pandora.be/nicvroom/

Kevin A. Scaldeferri

Dec 15, 2002, 11:05:03 PM12/15/02
to
I may get around to addressing more of this later, but for now I want
to say something about this one statement

In article <3DF5BFE2...@pandora.be>,


Nicolaas Vroom <nicolaa...@pandora.be> wrote:
>It should be possible to simulate any program that runs on a quantum
>computer on a classical computer.

Yes, this is true. However, the simulation is not efficient.

There are many models of computation which are equivalent, meaning
that they can simulate each other. However, computational equivalence
doesn't mean the complexity is the same. Frequently the simulation
involves an exponential increase in the time or space requirements.
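A back-of-the-envelope sketch of that exponential blow-up for the usual statevector simulation (my illustration; it assumes 16 bytes per complex amplitude, i.e. two 8-byte floats):

```python
# Classical statevector simulation of n qubits needs 2^n complex
# amplitudes. Memory alone grows exponentially; that is the
# inefficiency of the simulation, not of the machine being simulated.

def statevector_bytes(n_qubits: int) -> int:
    return 2 ** n_qubits * 16        # 16 bytes per complex amplitude

assert statevector_bytes(10) == 16_384            # ~16 KB: trivial
assert statevector_bytes(30) == 17_179_869_184    # ~16 GB: a big workstation
# 128 qubits (the example earlier in the thread) is beyond any
# conceivable classical memory:
assert statevector_bytes(128) > 10 ** 39
```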

Nicolaas Vroom

Dec 15, 2002, 11:15:26 PM12/15/02
to physics_research
Retransmission of previous reply

Nick
-----------------------------------------------------

Nicolaas Vroom wrote:
> 4. IMO the best one is:
> http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
> This document explains in detail:
> Example factor n = 55
>

After studying this document IMO Shor's algorithm
1. works for the following factors:
5 * 11 =55
3*17, 3*19
5*7, 5*11, 5*13, 5*17,
7*13, 7*17, 7*29
17*31
Because the number of numbers (x^a mod n) in Step 2 until you
find the number 1 is odd.
y is the number in the middle.
After finding y you can proceed to Step 6.
For example for 5 * 11 = 55 those numbers are:
13,4,52,16,43,9,7,36,28,*34*,2,26,8,49,32,31,18,14,17
y=34
For example for 3 * 17 = 51 those numbers are:
13, *16*, 4
y=16

2. does not work for the following factors:
5*29
Because the number of numbers (x^a mod n) in Step 2 until you
find the number 1 is even.
For example for 5 * 29 = 145 those numbers are:
16, 111, 36, 141, 81, 136

3. does not work for the following factors:
3*5, 3*11, 3*13
5*19, 5*23
7*11, 7*19, 7* 23
19*31
Because there is no 1
For example for 3*5=15 those numbers are:
9,6,9,6,9,6,9,6,
For example for 3*11=33 those numbers are:
12,12,12,12,12
For example for 7*11=77 those numbers are:
14,42,49,70,56


4. The program listing is at:
http://users.pandora.be/nicvroom/findprim.txt
http://users.pandora.be/nicvroom/findprim.bas
To get the program in Quick Basic change txt into bas

5. IMO Shor's algorithm is no guarantee that factoring works.
(It is clever, but I doubt it is that fast.)

6. Shor's algorithm cannot be used as proof that quantum computers
are faster than classical computers.
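The case analysis above can be reproduced with a short classical sketch of the number-theoretic part of Shor's algorithm (my own Python rendering, not the poster's findprim QBasic program): compute x, x^2, ... mod n until 1 appears, and if the order r is even, try gcd(x^(r/2) +/- 1, n). Note that finding r by stepping through the sequence like this is exactly the exponentially slow part that the quantum Fourier transform replaces.

```python
from math import gcd

def order(x, n):
    """Multiplicative order of x mod n: smallest r >= 1 with x^r = 1 (mod n).
    Returns None if the sequence never reaches 1 (e.g. gcd(x, n) > 1)."""
    v, r = x % n, 1
    while v != 1:
        v = v * x % n
        r += 1
        if r > n:
            return None
    return r

def shor_classical(x, n):
    """Classical post-processing: succeeds only if the order r is even
    and gcd(x^(r/2) - 1, n) is a proper factor."""
    r = order(x, n)
    if r is None or r % 2:
        return None
    y = pow(x, r // 2, n)
    p = gcd(y - 1, n)
    if 1 < p < n:
        return sorted((p, n // p))
    return None

# The worked case from the post: x = 13, n = 55, order 20, y = 13^10 = 34
assert order(13, 55) == 20
assert pow(13, 10, 55) == 34
assert shor_classical(13, 55) == [5, 11]

# The failing case 5 * 29 = 145 with x = 16: the order is odd (7)
assert order(16, 145) == 7
assert shor_classical(16, 145) is None
```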

Nick

Robert Tucci

Dec 15, 2002, 11:23:09 PM12/15/02
to
Peter Shor, using his mysterious Klingon/ATT cloaking device that
endows him with Google invisibility, gave an analogy about how the
"ferry path" is inaccessible to a classical commuter but possible for
the quantum commuter. This is misleading, in my opinion. Let NB be the
number of bits. All the ~2^NB paths that a qubit circuit follows are
accessible and can be followed individually by a classical computer.
It's just that they cannot all be advanced upon concurrently, unless
one had exponential (in NB) classical computing resources.

(performing a classical elementary operation) = multiplying classical
state (given by a complex number) by another complex number

(performing a quantum elementary operation) = multiplying quantum
state (given by a 2^NB long, column vector of complex numbers) by a
2^NB times 2^NB elementary matrix

This suggests that sometimes one elementary op of a quantum computer
accomplishes on the order of 2^NB elementary classical operations.

An elementary matrix, by definition, can only *change* the state of
one or two qubits, not more. The rest of the qubits are acted upon
without changing them. This acting upon certain qubits of the qubit
array without changing them (while at the same time changing other
qubits in the array), "is key" for vector spaces and quantum physics,
but not as key in classical physics.
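That picture can be made concrete with a small NumPy sketch (mine; it uses a Hadamard gate as the example elementary op): a gate that changes only one qubit of an NB-qubit register is, written out classically, a 2^NB x 2^NB matrix, the gate tensored with the identity on every other qubit.

```python
import numpy as np

NB = 8
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard, one qubit

def lift(gate, k, nb):
    """Embed a 2x2 gate on qubit k into the full 2^nb-dimensional space
    by tensoring with the identity on the other qubits."""
    M = np.array([[1.0]])
    for i in range(nb):
        M = np.kron(M, gate if i == k else np.eye(2))
    return M

U = lift(H, 3, NB)
assert U.shape == (2 ** NB, 2 ** NB)     # one "elementary" op, 256x256 classically

# One quantum step = one matrix-vector product on the 2^NB-long state
# vector, i.e. on the order of 2^NB scalar operations when simulated.
state = np.zeros(2 ** NB)
state[0] = 1.0                           # |00000000>
state = U @ state
assert np.isclose(np.abs(state[0]) ** 2, 0.5)   # qubit 3 is now in superposition
```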


Kevin A. Scaldeferri

Dec 16, 2002, 12:25:31 AM12/16/02
to
In article <3DEE794E...@pandora.be>,

Nicolaas Vroom <nicolaa...@pandora.be> wrote:
>1. Quantum Computing, Shor's Algorithm, and Parallelism
>Arun, Bhalla, Kenneth Eguro, Matthew Hayward
>http://alumni.imsa.edu/~matth/quant/433/shor-par/
>
>Shor's algorithm is discussed at:
>http://alumni.imsa.edu/~matth/quant/433/shor-par/node12.html
>The most details are at:
>http://alumni.imsa.edu/~matth/quant/433/shor-par/node15.html
>This page claims that in order "for factoring a given integer n"
>you need (see step 1-4) a classical computer.
>I found that very strange.

Well, you don't _need_ to do those steps on a classical computer, but
they are efficient on a classical computer. Given that the clock
speeds of classical computers will be massively higher than those of
quantum computers for the foreseeable future, it makes sense to take
such a hybrid approach.

>Starting with step 5 you need a quantum computer
>In step s discrete Fourier transform is computed.
>Why do you need that in order "for factoring a given integer n" ?
>

You don't need a Fourier transform to factor an integer. There are
many algorithms for factorization that don't use this technique at
all. However, the key in this approach is the periodicity of x^a mod
n, and you should be able to see that a Fourier transform is handy if
you want to pick out the periods of a function.
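That handiness can be illustrated classically (a toy sketch of mine with NumPy; n = 15 and q = 64 are deliberately tiny, and 7 is an arbitrary choice of base whose order mod 15 happens to divide q): the DFT of the period-r comb built from x^a mod n is nonzero only at multiples of q/r, which is exactly the periodicity the quantum Fourier transform reads off.

```python
import numpy as np

# f(a) = x^a mod n repeats with period r (the order of x mod n).
n, x, q = 15, 7, 64     # 7 has order 4 mod 15: 7, 4, 13, 1, 7, ...
comb = np.array([1.0 if pow(x, a, n) == 1 else 0.0 for a in range(q)])

spectrum = np.abs(np.fft.fft(comb))
peaks = {k for k in range(q) if spectrum[k] > 1e-9}

r = 4
assert peaks == {0, q // r, 2 * q // r, 3 * q // r}   # multiples of q/r = 16
```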

>http://alumni.imsa.edu/~matth/quant/433/shor-par/node16.html
>claims:
>"In general Shor's algorithm simulation seems a good candidate
>for parallelization."
>Why the word simulation ?

Simulation means modeling one type of machine on another. In this
case, we are talking about modeling the execution of a quantum
computer (quantum register machine) on a classical computer (classical
register machine).

>Why the word seems ?
>Does this imply that we are not sure ?

They don't prove this to be true at any point that I can see.

>After studying the above documents (I'am still in that
>process) I have the following remarks.
>1. Shor's Algorithm allows you (As far as I currently
>understand) to perform factoring of an integer.
>2. In order to do that you need both a classical computer
>and a quantum computer.

No, you could do it all on a quantum computer (in theory)

>3. It is not clear to me if you can do this whole calculation
>solely on a classical computer.

Yes, but it will be slower

>4. It is not clear to me what the advantages are by using
>a quantum computer.

Speed

>5. Assuming remark 1 is correct and assuming that using a classical
>computer and a quantum computer both together, is the best way to
>implement shor's algorithm than I'am still not convinced
>that quantum computers in general are faster
>(and eventualy will replace classical computers)

Good! No one knows whether practical QCs are possible, nor whether the
requirements for general error correction won't outweigh the
theoretical speedup under perfect conditions.

>6. Shor's Algorithm smells like a Monte Carlo simulation.
>As part of the algorithm you have to observe register 2
>and to measure register 1 (See url in remark 4)
>It is not clear to me if you only have to do this once.

No, you might have to do it more than once. However, the speedup is
great enough that the possibility of having to repeat the calculation
a couple times is insignificant.

>7. Assuming that you have to do this more often
>what is the chance that you never find the right answer
>specific with large numbers ?

That you never find the right answer? 0
(ignoring the issues of error correction and such, which we are
generally ignoring already)

What you really want to know is, what is the chance that you don't
find the answer in an exponential number of trials. Unfortunately, I
don't know the answer off-hand, but it should be exponentially small.

>8. It is interesting to study what happens if the number
>that you want to factor (using Shor's algorithm)
>is a prime number.

Maybe, although there are efficient algorithms for determining
primality, so there isn't much point in using Shor's algorithm.

Nicolaas Vroom

Dec 20, 2002, 7:35:20 PM12/20/02
to physics_research
Peter Shor wrote:
>
> Nicolaas Vroom <nicolaa...@pandora.be> asks:

>
> > After studying Shor's algorithm and quantum computers
> > (I'am still in that process) three items are still not clear to me:
> > 1. Why does one need FFT in order to factor a number
>
> It's the only way we know of to factor a number fast on a quantum computer.
> The two-sentence intution: We factor by finding periodicity. FFT's are
> very good at finding periodicity.

In the document
http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
Shor's algorithm is described in 6 Steps (with capital S).
However, IMO you only need 2 Steps (and a GCD operation) to find
(if you are "lucky") the two prime numbers.
See http://users.pandora.be/nicvroom/shor.htm for more details.
IMO you do not need FFTs to find the periodicity.

However, IMO already those 2 Steps are much slower than if you
use a classical factorization algorithm.
If you want to factorize 29*31, it requires 420 calculations to calculate
a string of numbers; the number at position 210 is the result you need:
i = 210, y = 869, p = 29, q = 31.
A classical algorithm requires only 30 calculations.
See the above url and QBasic program for details.



> > 2. Why are quantum computers soo much faster
>

> It's not that they're faster ... it's that they can do kinds of
> subroutines and algorithms that classical computers can't, so certain
> problems will take many fewer steps.

I expect you mean that quantum computers can solve an FFT in one Step
while classical computers need many steps (loops).

> If you run the SAME algorithm on
> a classical computer and a quantum computer, the quantum computer will
> probably be slower
<SNIP>

> > 3. Shor's algorithm does not always work.
>

> The fastest classical algorithms for factoring are randomized, as well.
There are three types of solutions for factoring:
a) algorithms that always work;
b) algorithms that do not always work;
c) randomized algorithms that always work.

IMO Shor's algorithm falls in category b.
In my matrix of 43 by 43 (see url) Shor's algorithm works 58 times
and fails 20 times.
With a type c solution you can be lucky and find the solution immediately.
To compare a with c you have to test many examples in order to decide
if your type c solution is the best.

> I said:
>
> > > Try and do this kind
> > > of step on a classical computer. A good question here is:
> > > how does this kind of step help?
>
> > Exactly
> > Why do you need this.
> > Does it work in practice. How far are we in the laboratory ?
>

> You need this type of step to compute a quantum Fourier transform.

As I explained above, IMO you do not need an FFT in order to find
periodicity.
A completely different question is which computer is best for
calculating FFTs: a quantum or a classical computer.

I will go in more detail in a separate posting.

<SNIP>

> Finally, you have expressed this question in a way that I can
> understand it. No, quantum computers are not faster using the same
> algorithm.
Is that also true in the future ?

<SNIP>

>
> > It is IMO easy possible that factoring a number
> > on a quantum computer plus classical computer combination is slower
> > than performing the same factoring on a classical computer
> > which uses the sieve algorithm.
>

> Why?
>
> Do you not believe the prediction of the number of steps taken by a
> quantum computer using the factoring algorithm is correct, assuming
> that quantum computers can be built?
> (I don't believe I've encountered somebody holding this view before).

Yes, that is my opinion. However, I have to be careful; see below.
To calculate the string of numbers in Step 2 (of the above-mentioned
document) in the case of 29*31, Shor's algorithm needs 420 steps
while a classical computer needs 30 steps in total.
Whether with Shor's algorithm you additionally need an FFT is of
no "importance".

I have to be careful because in order to implement Shor's algorithm
(all 6 Steps) you need both a classical computer and a quantum
computer.
IMO, to factorize a number you do not need the Steps performed
on the quantum computer; the first 2 Steps on a classical computer
are enough to generate the results of the 43*43 matrix.

I'm eager to know if the full 6 Steps give better performance.

> I hope this helps,
>
> Peter Shor

Many thanks for your time and effort to help me.

Nicolaas Vroom.

Nicolaas Vroom

Dec 20, 2002, 7:37:31 PM12/20/02
to physics_research
Peter Shor wrote:

> You need this type of step to compute a quantum Fourier transform.

To understand whether FFTs work on a quantum computer (QC) I have
a number of questions.

See http://www.ccmr.cornell.edu/~mermin/qcomp/chap1.pdf page 18

1. Suppose I have a QC which contains 100 qubits,
all with the same amplitudes a and b such that
p = |a|^2 = |b|^2 = 1/2.

If I measure 50 of those qubits now, then I will find on average
25 which are in state 1 and 25 which are in state 0.

If I measure the 50 other qubits some time later, then I expect
to find on average the same result, i.e.
25 which are in state 1 and 25 which are in state 0.

The reason for this question is that if we consider qubits
which are built the same, i.e. with a 50% probability
of being in state 1 and a 50% probability of being in state 0,
then they maintain this state.

In Schrödinger's cat experiment this is not the case,
because it starts from a radioactive element which
has a certain half-life.
If you open the box shortly after the start of the
experiment, you have a high chance that the cat is alive.
If you open the box long after the start of the
experiment, you have a low chance that the cat is alive.
As such, the outcome of the Schrödinger's cat experiment is
time-critical.

I assume that measuring qubits is not time-critical (on average).

2. Suppose I have a register R with 3 qubits.
What can I do with this register?

a) Can we initialize R with 8 (all) zeros?
b) Can we initialize R with the numbers 0 to 7?
c) Can we initialize R with 4 zeros and 4 sevens?

In case a, does this mean we always measure a zero?
In case b, does this mean we have the same chance
to measure a 0 or a 1 or a 2 etc. or a 7?
In case c, does this mean that we will never measure
the values 1 to 6?
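For what it's worth, the three initializations in question 2 can be simulated with a classical amplitude vector (my sketch of the standard statevector picture, not a claim about any hardware; "the numbers 0 to 7" is taken to mean an equal superposition of the 8 basis states). In that picture the answers to a, b and c are all yes:

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(amps, shots=1000):
    """Sample a 3-qubit register many times; return the set of outcomes."""
    p = np.abs(amps) ** 2
    return set(rng.choice(len(amps), size=shots, p=p / p.sum()))

# (a) all amplitude on |000>: every measurement gives 0
a = np.zeros(8); a[0] = 1.0
assert measure(a) == {0}

# (b) equal amplitude on |0>..|7>: each outcome equally likely
b = np.ones(8) / np.sqrt(8)
assert measure(b) == set(range(8))

# (c) amplitude split between |0> and |7>: outcomes 1..6 never occur
c = np.zeros(8); c[0] = c[7] = 1 / np.sqrt(2)
assert measure(c) == {0, 7}
```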

3. Suppose I have two registers R1 and R2,
each consisting of 6 qubits and
each initialized (the same) with the values
0, 1, 2, 3 and 4 (with equal probability).
Now I want to multiply those 2 registers and store
the result in a third register R3.
That means 0*0, 1*1, 2*2, 3*3, etc.
Does R3 contain the values 0, 1, 4, 9 and 16?
Are you sure that R3 does not contain any other value
when measured, for example 6 or 12,
i.e. 0*1, 1*2, 2*3, 3*4, 4*0?

The reason for this question is: are we sure that
the corresponding values are correctly selected?
I.e. the cycle time of the two registers R1 and R2
should be identical and stable,
and the corresponding values should be correctly "linked".

Nick

Nicolaas Vroom

Dec 20, 2002, 7:39:44 PM12/20/02
to physics_research
"Kevin A. Scaldeferri" wrote:
>
> >Starting with step 5 you need a quantum computer
> >In step s discrete Fourier transform is computed.
> >Why do you need that in order "for factoring a given integer n" ?
> >
>
> You don't need a Fourier transform to factor an integer. There are
> many algorithms for factorization that don't use this technique at
> all. However, the key in this approach is the periodicity of x^a mod
> n, and you should be able to see that a Fourier transform is handy if
> you want to pick out periods of a functions.
>
If you study
http://users.pandora.be/nicvroom/shor.htm
and the simulation program,
then you will see that IMO you can factorize a number with
(part of) Shor's algorithm on a standard PC without FFTs.
IMO Shor's algorithm requires more steps/calculations
than a classical factorization algorithm
(for example, 29*31 requires 420 steps versus 30 steps).

> >3. It is not clear to me if you can do this whole calculation
> >solely on a classical computer.
>
> Yes, but it will be slower
>

Peter Shor wrote:
No, quantum computers are not faster using the same
algorithm.

> >4. It is not clear to me what the advantages are by using
> >a quantum computer.
>
> Speed
>
See above.

Nick

Kevin A. Scaldeferri

Dec 21, 2002, 10:07:45 AM12/21/02
to
In article <3E00886B...@pandora.be>,

Nicolaas Vroom <nicolaa...@pandora.be> wrote:
>If you study
>http://users.pandora.be/nicvroom/shor.htm
>and the simulation program
>than you will see that IMO you can factorize a number with
>(part of) Shor's Algorithm only on a standard PC without FFT's.
>IMO Shor's Algorithm requires more steps/calculations
>than a classical factorization algorithm

>(For example 29*31 requires 420 steps versus 30 steps)

That's nice, but it's really beside the point. What does your
simulation say when instead of the factors being 29 and 31, they are 29
and 31 digits long? Or 290 and 310 digits?

Because your later comments in the post lead me to believe that you
are confused on this point, let me explain that when computer
scientists talk about the "speed" of an algorithm, they usually mean
the asymptotic scaling of the algorithm for large inputs.
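As a concrete instance of asymptotic scaling (my illustration; the count ignores the cost of each division and all lower-order terms): trial division up to sqrt(N) takes about 10^(d/2) steps for a d-digit N, so each added digit multiplies the work by about sqrt(10), and any constant-factor advantage in clock speed quickly stops mattering.

```python
import math

# Trial division tests candidate divisors up to sqrt(N):
# ~10^(d/2) candidates for a d-digit number N.
def trial_division_steps(digits: int) -> int:
    return math.isqrt(10 ** digits) + 1

assert trial_division_steps(6) < 1_100          # a 6-digit N: ~10^3 steps
assert trial_division_steps(60) > 10 ** 29      # a 60-digit N: ~10^30 steps
assert trial_division_steps(300) > 10 ** 149    # the 300-digit case from the thread
```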

However, when

>Peter Shor wrote:
>No, quantum computers are not faster using the same
>algorithm.

He was actually talking about the fact that the clock speeds of
quantum computers will be much lower than those of classical
computers for the foreseeable future.


When I said:

>
>> >3. It is not clear to me if you can do this whole calculation
>> >solely on a classical computer.
>>
>> Yes, but it will be slower
>>

I was referring to the fact that a classical computer can't truly
execute the same algorithm as a quantum computer. When a classical
computer simulates Shor's algorithm, it is running a different
algorithm which scales much worse.

This is a little subtle, since people are often a little sloppy with
the terminology. However, to really talk about the complexity of an
algorithm, you need to know the machine architecture well enough to
know what the fundamental operations are.

Nicolaas Vroom

Dec 23, 2002, 5:59:01 PM12/23/02
to physics_research


"Kevin A. Scaldeferri" wrote:
>
> In article <3E00886B...@pandora.be>,
> Nicolaas Vroom <nicolaa...@pandora.be> wrote:
> > If you study
> > http://users.pandora.be/nicvroom/shor.htm
> > and the simulation program

> > than you will see etc.

> >
> >(For example 29*31 requires 420 steps versus 30 steps)
>
> That's nice, but it's really beside the point.

IMO this is a very important statement.
Shor's algorithm consists of three parts:
1. Pre-processing. 2. FFT. 3. Post-processing.
The pre- and post-processing parts are performed on a classical
computer. The FFT part is performed on a quantum computer.
IMO, in order to do factorization you do not need the FFT part.

See: http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
1. The preprocessing part involves the following parameters:
n, q, x and a.
n is the number to factorize; q = 2^x with 2*n*n < q < 3*n*n.
For example: n = 5*11 = 55, x = 13, q = 8192
             n = 10*10,    x = 15, q = 32768
             n = 29*31,    x = 21, q = 2097152
             n = 100*100,  x = 28, q = 268435456
Step 2 involves modular exponentiation, i.e.
the calculation of x^a mod n with a going from 0 to 8191.
That is 8192 values, calculations, or 8192 steps!
Those steps are required to initialize the FFT on the
quantum computer.
This number of preprocessing steps (= q) is gargantuan
and IMO not necessary:
as soon as x^a mod n = 1 you can stop, because
all further x^a mod n values are a repetition.
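One caveat on the step counting (my aside, not the poster's): a single value of x^a mod n does not cost a multiplications, because square-and-multiply needs only about log2(a) of them; this is what Python's three-argument pow does. The q evaluations matter on the quantum side only because the register holds all values of a in superposition at once.

```python
def modpow(x, a, n):
    """Square-and-multiply: returns (x^a mod n, number of modular
    multiplications performed)."""
    result, base, mults = 1, x % n, 0
    while a:
        if a & 1:
            result = result * base % n
            mults += 1
        base = base * base % n
        mults += 1
        a >>= 1
    return result, mults

r, m = modpow(13, 8191, 55)
assert r == pow(13, 8191, 55)            # agrees with the built-in
assert m <= 2 * (8191).bit_length()      # about 26 multiplications, not 8191
```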

In the case of 5 * 11: x^a mod n = 13^20 mod 55 = 1
a/2 = 10 13^10 mod 55 = 34 = y
GCD of (y-1 and 55) = (33 and 55) = 11
GCD of (y+1 and 55) = (35 and 55) = 5

In case of n = 29 * 31: 21^420 mod n = 1
a/2 = 210 21^210 mod n = 869 = y
That means you still need 420 calculations
while classical factorization requires 30 calculations.

In case of
97*89 a = 353 y = 6498 classical 89 calculations
89*101 a = 2200 y = 7475 classical 89 calculations
97*101 a = 800 y = 4849 classical 97 calculations
prediction:
10^150*10^150 classical 10^150 calculations a > 2*10^150 q > 10^600
What is worse, Shor's algorithm does not always work.

What I still do not understand is that the article in
Scientific American claims that only 5*10^10 steps are required.
Apparently they do not take the preprocessing into account.

> What does your
> simulation say when instead of the factors being 29 and 31, they are 29
> and 31 digits long? Or 290 and 310 digits?

See above.
All the numbers increase, but preprocessing is the worst.



> Because your later comments in the post lead me to believe that you
> are confused on this point, let me explain that when computer
> scientists talk about the "speed" of an algorithm, they usually mean
> the asymptotic scaling of the algorithm for large inputs.
>
> However, when
>
> >Peter Shor wrote:
> >No, quantum computers are not faster using the same
> >algorithm.
>
> He was actually talking about the fact that the clock speeds of
> quantum computers are much lower than classical computers, for the
> forseeable future.

I understand that.
In the case of 5 * 11, Step 4 "Discrete Fourier Transform of
Register 1" requires on a classical computer q * q/20
= 8192 * 410 calculations, i.e. about 3.4*10^6 calculations.
In case of 29*31: q * q/420 = 2097152 * 2097152/420 calculations.

On the quantum computer this requires ALWAYS (?) 1 calculation.
Even if this ALWAYS takes 1 second, with more digits
the quantum computer will very soon outperform FFT calculations
on a classical computer.

The most important question to ask is: does the FFT work on a QC?

> When I said:
>
> >
> >> >3. It is not clear to me if you can do this whole calculation
> >> >solely on a classical computer.
> >>
> >> Yes, but it will be slower
> >>
>
> I was referring to the fact that a classical computer can't truly
> execute the same algorithm as a quantum computer. When a classical
> computer simulates Shor's algorithm, it is running a different
> algorithm which scales much worse.

If you want to compare, you also have to do the preprocessing
and the post-processing on a quantum computer.

The question is then how many calculations are involved to do
Step 4: 1? The same number as on a classical computer?

A different question is how many steps are involved,
and how long it takes, to do a full factorization
with a classical algorithm on a quantum computer (without FFT),
without any optimization (sieve),
i.e. is this 10^150 calculations for two 150-digit numbers?

Nick

Peter Shor

unread,
Dec 29, 2002, 2:30:50 PM12/29/02
to sci-physic...@moderators.isc.org

Step 2 isn't preprocessing in the quantum factoring algorithm;
it's intrinsically part of the quantum piece of the algorithm.
If you want to simulate the quantum algorithm on a classical
computer, you do indeed need these 8192 calculations. If
you had a quantum computer, you would only do this calculation
one time, on a superposition of 8192 different inputs, and end
up with a superposition of 8192 different values, which gets
fed into the FFT.

... long sections snipped ...

> A different question is how many steps are involved
> and how long does it take to do a full factorization
> with a classical algorithm on a Quantum Computer (without FFT.)
> without any optimization (sieve)
> i.e. are this 10^150 calculations for two 150 digit numbers?

Saying that the quadratic (or number field) sieve is an optimized
version of trial division is kind of like saying that an
automobile is an optimized version of a donkey cart. They work
using completely different principles; if you were programming
them, the only code you could reuse is the code for multiplication
and division of large numbers. I would recommend reading a good
explanation of the quadratic sieve. Unfortunately, the only one
I know of offhand (Brian Hayes' article "The magic words are
squeamish ossifrage") is not available online.
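The principle these sieve methods share, finding a congruence of squares x^2 = y^2 (mod n) and taking gcd(x - y, n), can be illustrated with the much simpler Fermat search (an editorial toy sketch only; the quadratic sieve constructs such congruences far more cleverly and scales entirely differently):

```python
from math import gcd, isqrt

def fermat_factor(n):
    """Search for x with x*x - n a perfect square y*y, then split n."""
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            # x^2 = y^2 (mod n), so gcd(x - y, n) is a factor
            return gcd(x - y, n), gcd(x + y, n)
        x += 1

print(fermat_factor(899))   # (29, 31)
```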

Peter Shor

Mike Stay

unread,
Dec 30, 2002, 2:03:22 AM12/30/02
to sci-physic...@moderators.isc.org

peter...@aol.com (Peter Shor) wrote in message
> I would recommend reading a good
> explanation of the quadratic sieve. Unfortunately, the only one
> I know of offhand (Brian Hayes' article "The magic words are
> squeamish ossifrage") is not available online.
>
> Peter Shor

I found this link to be helpful:
http://www.math.ksu.edu/math511/notes/iqs2000.htm

Nicolaas Vroom

unread,
Jan 2, 2003, 7:11:42 PM1/2/03
to

Peter Shor wrote:


>
> Nicolaas Vroom <nicolaa...@pandora.be> wrote:
>
> > Step 2 involves modular exponentation i.e.
> > the calculation of x^a mod n with a going from 0 to 8191
> > This are 8192 values, calculations or 8192 steps !!!
> > Those steps are required to initialize FFT on the
> > quantum comp.
> > This number of preprocessing steps (=q) is gargantuan
> > and IMO not necessary.
>
> Step 2 isn't preprocessing in the quantum factoring algorithm;
> it's intrinsically part of the quantum piece of the algorithm.
> If you want to simulate the quantum algorithm on a classical
> computer, you do indeed need these 8192 calculations.

I doubt that.
If you test whether x^a mod n = 1 (for a from 0 to 8191) you can stop.
Then you know the periodicity.

13^20 mod 55 = 1

y = 13^10 mod 55 = 34
GCD (33,55) = 11 GCD(35,55)= 5
See http://users.pandora.be/nicvroom/shor.htm for details.

In total there are four possibilities:
Two that work and two that are in error.
(See above url for details)

In case of an error you can also test other values of x.
For example you can also try x = 14 and x = 15.
On a quantum computer you then get a superposition of respectively
16384 and 32768 different values.
In case of x = 14: 14^10 mod 55 = 1 (the "same" as for x = 13)
In case of x = 15 the string of values x^a mod 55 is:
1,15,5,20,25,45,15,5,20,25,45,15 etc. etc.
(it never returns to 1, because gcd(15,55) = 5)
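The behaviour of these trial values of x can be checked with a short sketch (editorial illustration; note that a base sharing a factor with n, as x = 15 does with 55, already yields a factor via the GCD, with no period finding at all):

```python
from math import gcd

def powers(x, n, limit=12):
    """First `limit` values of x**a % n for a = 0, 1, 2, ..."""
    seq, v = [], 1
    for _ in range(limit):
        seq.append(v)
        v = (v * x) % n
    return seq

print(powers(14, 55))   # returns to 1 at a = 10, so the period of 14 is 10
print(powers(15, 55))   # never returns to 1, since gcd(15, 55) > 1
print(gcd(15, 55))      # 5, a factor of 55 for free
```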


> If
> you had a quantum computer, you would only do this calculation
> one time, on a superposition of 8192 different inputs, and end
> up with a superposition of 8192 different values, which gets
> fed into the FFT.

(In case of n = 55 you get 20 different values and each value
roughly 410 times)

I call this time: time1

In case of a 300-digit number, q = 2*n*n is roughly 10^600, i.e. x is
roughly 2000; that means you only do the calculation one time (time2)
on a superposition of about 2^2000 different inputs, and end up etc.

The question is: are time1 and time2 identical?
If not, what are they?

In the original article in Scientific American they write
that a quantum algorithm (to find the prime factors of a
300-digit number) would take only 5*10^10 steps, or less than a
second at terahertz speed.

The question is: how do those 5*10^10 steps compare with the 6
steps of the article under discussion, i.e. the above-mentioned
time (Step 2) and the time to perform the FFT?

Nick

Anatoly Vorobey

unread,
Feb 7, 2003, 9:33:22 PM2/7/03
to
In article <H70vn...@research.att.com>, Peter Shor wrote:

> Some poor soul not cited by Anatoly Vorobey wrote:

>> It is IMO easily possible that factoring a number
>> on a quantum computer plus classical computer combination is slower
>> than performing the same factoring on a classical computer
>> which uses the sieve algorithm.

> Why?
>
> Do you not believe that quantum computers will ever be built?
> (a reasonable point of view).
>
> Do you not believe that quantum mechanics is correct?
> (a view shared by a number of otherwise reasonable people, but one
> which I think requires ignoring a great deal of evidence).

Is it possible that nature might turn out to not be indifferent to the
number of states in a superposition?

I mean, one "naive" interpretation of how quantum factoring and other
quantum algorithms work is that we take advantage of the fact that
when we act on a superposition of states our action is performed
"in parallel" on each state in the superposition, and we are kind of
gaining massively parallel execution without paying anything for that,
right? It's as if nature were "storing" huge, in fact
unbounded in principle, amounts of "data" for us. So my question is,
is it possible that in reality nature might turn out to be limited
in terms of how much data it can "store" in this way? Could the
theory of quantum mechanics be changed so that superpositions with very
very large number of states in them would be outlawed, and the theory
would still remain logically coherent? Perhaps by positing that such
"large" superpositions would collapse as if observed, or in some other
manner?

Forgive me if these questions are stupid or too naive, I'm but an
amateur.

--
Anatoly Vorobey,
my journal (in Russian): http://www.livejournal.com/users/avva/
mel...@pobox.com http://pobox.com/~mellon/
"Angels can fly because they take themselves lightly" - G.K.Chesterton

Ralph Hartley

unread,
Feb 15, 2003, 3:52:09 AM2/15/03
to
Anatoly Vorobey wrote:

> Is it possible that nature might turn out to not be indifferent to the
> number of states in a superposition?

I'm not going to say anything is *impossible*, but I would say *highly*
implausible.

You should note that in Quantum theory, the "number of states in a
superposition" is not a physical quantity. It depends strongly on your
choice of a basis. Looked at one way, you might have a superposition of a
very large number of states, but just by *thinking* about it differently,
it becomes a single state with no superposition involved at all.

The "Quantum" answer is no, it isn't possible.

> we are kind of
> gaining massively parallel execution without paying anything for that,
> right? It's as if the nature was "storing" huge, in fact
> unbounded in principle, amounts of "data" for us.

Maybe "kind of". Quantum and classical information are sort of like Dollars
and Soviet Rubles used to be. Dollars were obviously better than Rubles, but
if you wanted to convert Dollars to Rubles (or buy something priced in
Rubles) you had to accept the official 1-to-1 exchange rate. The universe
has no black market.

It *looks* like nature *must* be storing lots of data, but when you try to
retrieve that data, you only get one classical bit per qubit. (Similarly, by
violating Bell's inequality, it *looks* like nature *must* be transmitting
information instantaneously, but if *you* want to send some, you can't.)

> Could the
> theory of quantum mechanics be changed so that superpositions with very
> very large number of states in them would be outlawed, and the theory
> would still remain logically coherent? Perhaps by positing that such
> "large" superpositions would collapse as if observed, or in some other
> manner?

I doubt it, and I'm pretty sure such a theory wouldn't look much like
quantum mechanics. Ordinary quantum effects typically do involve
superpositions of very large (if not infinite) numbers of states.

For instance, light can be focused to a fine point, but can also be
prepared in a state with a very small uncertainty in momentum. In quantum
mechanics, position states can be described as infinite superpositions of
momentum states, and vice versa.

For what you are suggesting to happen, one type of states (let's assume for
concreteness that it is position states) would have to be the "real" states
and the other (momentum) the "superpositions". If there were a bound on the
number of "real" states in a superposition, there would be a limit to how
precisely the "superpositions" could be measured. In our example that would
be a limit to how coherent a laser could be.

I don't know what the experimental limit on such an effect would be, but I
would guess that it is *very* tight. Probably tight enough that it
wouldn't bother any practical quantum computer (there *are* lots of things
that *do* make a quantum computer hard to build).

Ralph Hartley


michaelprice

unread,
Feb 21, 2003, 12:25:18 AM2/21/03
to

"Ralph Hartley" <har...@aic.nrl.navy.mil> wrote

> It *looks* like nature *must* be storing lots of data, but when you
> try to retrieve that data, you only get one classical bit per qubit.

This has always made me sceptical about whether quantum
computers could ever work in theory, even if the practical problems
could be overcome (which I'm sure they will be, eventually).

This point often seems overlooked by proponents of quantum computers. I
have never seen a cogent explanation of how the result is read out of a
quantum computer, in a fashion that is superior to a classical computer.

Bucksbaum's "Library of Congress in an electron" claim fails at this
reading-out stage; this major difficulty is only obscured because his
experiments operate on millions of identically prepared electrons. By
operating on such a large number of electrons or atoms Bucksbaum is able to
exploit the million-fold parallelism present and thereby extract information
to digitally reconstruct the electrons' wavefunctions. In this experiment,
information recovery is limited to 3 bits per electron - each electron in
his setup can occupy one of 8 levels. The more atoms used, the more
information they can recover.

In effect they haven't got a new-fangled quantum computer but an
old-fashioned parallel-processing computer.

Is this a generic problem with all quantum computers?

Cheers,
Michael C Price

http://mcp.longevity-report.com
http://www.hedweb.com/manworld.htm

Nicolaas Vroom

unread,
Mar 25, 2003, 10:26:40 PM3/25/03
to physics_research


Anatoly Vorobey wrote:
>
> In article <H70vn...@research.att.com>, Peter Shor wrote:
>
> > Some poor soul not cited by Anatoly Vorobey wrote:
>
> >> It is IMO easily possible that factoring a number
> >> on a quantum computer plus classical computer combination is slower
> >> than performing the same factoring on a classical computer
> >> which uses the sieve algorithm.
>

I wrote that in message 7 in this thread.

[Moderator's note: "message 7" is not anything well-defined -- KS]

But unfortunately, after studying documents related to quantum computers,
my understanding of how quantum computers operate has become less.

My basic reading is the eight documents mentioned in:
http://users.pandora.be/nicvroom/shor.htm
specifically documents 1, 6, 7 and 8.
I have also improved my simulation program of the 6 steps
outlined in document 1.
http://users.pandora.be/nicvroom/progrm19.htm

One very interesting factorization example is n = 85.
Step 1 with n = 85 gives x = 14 and q = 2^x = 16384.
Step 2 gives the following sequence of 16 numbers:
1,14,26,24,81,29,66,74,16,54,76,44,21,39,36,79
and again.
This sequence repeats itself 1024 (= M) times.
(The period is 16, so a/2 = 8, y = 14^8 mod 85 = 16, and the GCDs
give the factors 17 and 5.)
Step 3: you will measure in Register 2 one of those values.
For example, try 1.
Step 4 is the Fast Fourier Transform,
with c going from 0 to q-1 = 16383.
Step 5: you will measure Register 1.
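The n = 85 numbers above can be checked with a short classical sketch (editorial illustration, assuming base x = 14 as in the post):

```python
from math import gcd

n, x, q = 85, 14, 2**14   # step-1 parameters from the post (q = 16384)

# Step 2: list x**a mod n until the value 1 recurs
seq, v = [], 1
while not (seq and v == 1):
    seq.append(v)
    v = (v * x) % n

print(seq)                        # the 16 numbers quoted above
print(len(seq), q // len(seq))    # period 16, repeated M = 1024 times

y = pow(x, len(seq) // 2, n)      # y = 14**8 mod 85 = 16
print(gcd(y - 1, n), gcd(y + 1, n))   # 5 17
```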

The question is what you can measure and what the probabilities are.
My simulation program shows the following FFT values:
for c = 0 we get 1024, for c = 1024: 1024,
for c = 2048: 1024, for c = 3072: 1024, etc.,
i.e. for c = m*1024 we get 1024, with m going from 0 to 15.
All the intermediate values are zero.
See http://users.pandora.be/nicvroom/shorfft.htm
The chances p(c) are all equal to 6.25%,
i.e. for c = m*1024 we get p(c) = 6.25% with m going from 0 to 15.
The grand total of all those values is 100%.
I do not know if that is correct.
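The step-5 probabilities can be checked directly from the definition of the Fourier transform (a pure-Python editorial sketch; it assumes the period r = 16 and that the register-2 measurement of the value 1 leaves register 1 in a uniform superposition over the M = 1024 exponents a = 0, 16, 32, ... -- a different measured value only shifts all phases by a constant and leaves the probabilities unchanged):

```python
import cmath

q, r = 16384, 16
M = q // r   # 1024 exponents a survive the register-2 measurement

def prob(c):
    """Probability of measuring c in register 1 after the transform."""
    amp = sum(cmath.exp(2j * cmath.pi * (d * r) * c / q) for d in range(M))
    amp /= (M * q) ** 0.5   # 1/sqrt(M) from the state, 1/sqrt(q) from the transform
    return abs(amp) ** 2

print(round(prob(1024), 4))   # 0.0625, i.e. 6.25%, at a multiple of q/r
print(round(prob(512), 12))   # 0.0 between the peaks
```

Each of the 16 peaks c = 0, 1024, ..., 15360 carries probability 1/16, so the 16 values do sum to 100%.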

The question is what the value 6.25% means.
According to the literature it means that when you measure Register 1
you have a 6.25% chance of measuring either
0 (?) or 1024 or 2048 or 3072 or 4096 etc. up to 15360.
Using any of those numbers (except 0) you perform
in Step 6 a Continued Fraction Convergent to find the periodicity.
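Step 6, recovering the period from a measured c, can be sketched with Python's Fraction, whose limit_denominator performs exactly this continued-fraction search (editorial sketch; n = 85 is used as the denominator bound, since the period cannot exceed n):

```python
from fractions import Fraction

q, n = 16384, 85
for c in (1024, 3072, 15360):
    f = Fraction(c, q).limit_denominator(n)
    print(c, f, f.denominator)   # denominator 16 = the period, in each case
```

Note that c = 0 is uninformative, and a value such as c = 2048 reduces to 1/8, yielding only a divisor of the period.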

My interpretation of the FFT values in step 5 is that
we only have a 16/16384 = 1/1024 chance of finding the value 1024
and a 1023/1024 chance of finding the value 0.

What is the correct interpretation?
Is my simulation program correct?

Nick
http://users.pandora.be/nicvroom/
Question 23 Discusses Shor's Algorithm

Nicolaas Vroom

unread,
Apr 23, 2003, 2:12:42 AM4/23/03
to physics_research

Nicolaas Vroom wrote:
>
> My interpretation of the FFT values in step 5 is that
> we only have a 16/16384=1/1024 chance of finding the value 1024
> and a 1023/1024 chance of finding the value 0.
>
> What is the correct interpretation?
> Is my simulation program correct?
>
> Nick
> http://users.pandora.be/nicvroom/
> Question 23 Discusses Shor's Algorithm

My simulation program is based around the following document:
"Shor's Algorithm for factorizing large numbers"
by G. Eric Moorhouse, UW Math
http://everest.uwyo.edu/~moorhous/quantum/talk2.pdf
In this document Shor's Algorithm is implemented in 6 steps.

In step 4 of this document the state phi is calculated
as a summation of two loops:
an inner loop and an outer loop.
The inner loop consists of a summation of
e^iwcd with d going from 0 to 409 (in case n = 55).
The outer loop consists of a summation of
e^ixc * "result of inner loop"
with c going from 0 to 8191 (in case n = 55).

In order to calculate the state phi I use both
the cos part and the sin part.

As such I combined both in order to calculate
a) the value of the FFT
b) the probability Pr(c)
In order to calculate the value of the FFT
I called the summation of the cos part: tot1
and the summation of the sin part: tot2

tot1 for the inner loop then becomes:
tot1 = cos(0wc)+cos(1wc)+cos(2wc)+cos(3wc) etc.
tot1 can be split up into two parts, an even and an odd part:
f1c = cos(0wc)+cos(2wc)+cos(4wc)+cos(6wc) etc.
f2c = cos(1wc)+cos(3wc)+cos(5wc)+cos(7wc) etc.
Each of these functions f1c and f2c can be seen
as points on a circle.
When you do that, each summation f1c and f2c can be written
in closed form on one line.
For details see:
http://users.pandora.be/nicvroom/shoropt.htm

The same can also be done for tot2:
tot2 = sin(0wc)+sin(1wc)+sin(2wc)+sin(3wc) etc.

This strategy tremendously improves the calculation
of both the FFT value and the Pr(c) value.
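The circle picture above amounts to summing a geometric series in closed form: for e^{it} != 1, sum_{d=0}^{D-1} e^{itd} = (e^{itD} - 1)/(e^{it} - 1), whose real and imaginary parts are exactly the cos and sin sums tot1 and tot2. A quick numerical check (editorial sketch, with an illustrative angle t = wc):

```python
import cmath

def direct(t, D):
    """Term-by-term sum of e^{itd}, d = 0 .. D-1 (D operations)."""
    return sum(cmath.exp(1j * t * d) for d in range(D))

def closed(t, D):
    """Geometric-series closed form (constant number of operations)."""
    if abs(cmath.exp(1j * t) - 1) < 1e-12:   # degenerate case e^{it} = 1
        return complex(D)
    return (cmath.exp(1j * t * D) - 1) / (cmath.exp(1j * t) - 1)

t, D = 2 * cmath.pi * 7 / 8192, 410   # e.g. w = 2*pi/8192, c = 7, 410 terms
print(abs(direct(t, D) - closed(t, D)))   # ~0: one evaluation replaces 410 terms
```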

Is this optimisation allowed?

Nick.
