
Toaster to Generate Random Numbers


Omar Bohsali

unread,
Jan 8, 2003, 4:51:36 PM1/8/03
to
Hello.

Is it possible to use thermal noise to generate random numbers.

My idea is the following:

Get a toaster, and measure the noise generated by it. Amplify the noise, and
then have it digitized by a program that will turn it into numbers.

One question still lingers in my plan:

Is thermal noise random?

Some people say that it is, some say that it isn't.

Please enlighten me.

--
Thank You,

Omar Bohsali
http://www.omarbohsali.com
"There are 10 types of people in this world. People who understand Binary,
and people who don't"

fungus

unread,
Jan 8, 2003, 5:02:09 PM1/8/03
to
Omar Bohsali wrote:
> Hello.
>
> Is it possible to use thermal noise to generate random numbers.
>
> My idea is the following:
>
> Get a toaster, and measure the noise generated by it. Amplify the noise, and
> then have it digitized by a program that will turn it into numbers.
>

Is it a good idea to have a toaster turned on 24/7?

I'm sure most toasters will burn out pretty fast
if you try it.


> Please enlighten me.
>

Try this instead: http://www.lavarnd.org/

Any moving image will do - paper streamers
blowing in a fan, feed the result into a
cheap webcam and hash the result.
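
A minimal sketch of that webcam-plus-hash idea in Python, assuming OpenCV
(cv2) is installed and a camera exists at index 0; the frame count and the
choice of SHA-256 are illustrative choices, not anything LavaRnd itself
prescribes:

import hashlib
import cv2

def webcam_random_bytes(n_frames=30):
    cap = cv2.VideoCapture(0)           # open the default camera
    h = hashlib.sha256()
    for _ in range(n_frames):
        ok, frame = cap.read()          # frame is a numpy array of pixels
        if ok:
            h.update(frame.tobytes())   # feed the raw pixel data into the hash
    cap.release()
    return h.digest()                   # 32 bytes derived from image noise

print(webcam_random_bytes().hex())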


--
<\___/>
/ O O \
\_____/ FTB.

lurker

unread,
Jan 8, 2003, 5:36:47 PM1/8/03
to

So is this a "think different" type of question? Usually when we
think about gathering entropy from thermal noise we are talking about
silicon chips or diodes.

Casey Schaufler

unread,
Jan 8, 2003, 6:59:08 PM1/8/03
to
Omar Bohsali wrote:
>
> Hello.
>
> Is it possible to use thermal noise to generate random numbers.
>
> My idea is the following:
>
> Get a toaster, and measure the noise generated by it. Amplify the noise, and
> then have it digitized by a program that will turn it into numbers.

Be careful not to step on the LavaRand patients.
Yes, someone has a patient on gathering random
numbers by pointing digital cameras at a set of
lava lamps. Your notion might infringe on that
patient in that it differs only by the wavelength
of radiation measured.

--

Casey Schaufler Manager, Trust Technology, SGI
ca...@sgi.com voice: 650.933.1634
cas...@pager.sgi.com Pager: 877.557.3184

Paul Pires

unread,
Jan 8, 2003, 8:00:00 PM1/8/03
to

Casey Schaufler <ca...@sgi.com> wrote in message news:3E1CBB4C...@sgi.com...

> Omar Bohsali wrote:
> >
> > Hello.
> >
> > Is it possible to use thermal noise to generate random numbers.
> >
> > My idea is the following:
> >
> > Get a toaster, and measure the noise generated by it. Amplify the noise, and
> > then have it digitized by a program that will turn it into numbers.
>
> Be careful not to step on the LavaRand patients.
> Yes, someone has a patient on gathering random
> numbers by pointing digital cameras at a set of
> lava lamps. Your notion might infringe on that
> patient in that it differs only by the wavelength
> of radiation measured.

This is a good example of why spell checkers are the spawn
of the devil. They don't fix errors, they just make them consistent.

Have patience my son.

Paul

Carlos Moreno

unread,
Jan 8, 2003, 8:11:37 PM1/8/03
to

Omar Bohsali wrote:

> Hello.
>
> Is it possible to use thermal noise to generate random numbers.
>
> My idea is the following:
>
> Get a toaster, and measure the noise generated by it. Amplify the noise, and
> then have it digitized by a program that will turn it into numbers.
>
> One question still lingers in my plan:
>
> Is thermal noise random?
>
> Some people say that it is, some say that it isn't.


The discussion is mostly philosophical. Most (sane) people
should agree that thermal noise can be considered random
for all practical purposes from any conceivable point of
view.

Now, the "philosophical" argument could be based on the
doubt if thermal noise is a signal truly unpredictable, or
if it is just that we don't have the capacity to predict it.

In fact, even more philosophically, some could argue that
"randomness" does not exist in nature, and that it is only
a theoretical concept in our minds. Everything is predictable,
only that there are many things for which we don't have the
means or the capacity or the knowledge necessary to predict
it, so we call them "random", "unpredictable".

After all, one could argue that given the *exact* values
for *all* the physical parameters (speed, position,
electrical charge, etc.) of every single particle or point
of matter one second after the "big bang", then you could
(theoretically speaking) determine the *exact* state of
the universe *at any given time* (i.e., position and
speed of *every* single particle in the universe).

Sad notion, isn't it?

Carlos
--


Ant

unread,
Jan 8, 2003, 8:26:50 PM1/8/03
to
"Paul Pires" <dio...@got.net> wrote in message
news:nyOdnRKzRtJ...@got.net...

>
> Casey Schaufler <ca...@sgi.com> wrote in message
news:3E1CBB4C...@sgi.com...
> > Omar Bohsali wrote:
> > >
> > > Hello.
> > >
> > > Is it possible to use thermal noise to generate random numbers.
> > >
> > > My idea is the following:
> > >
> > > Get a toaster, and measure the noise generated by it. Amplify the noise, and
> > > then have it digitized by a program that will turn it into numbers.
> >
> > Be careful not to step on the LavaRand patients.
> > Yes, someone has a patient on gathering random
> > numbers by pointing digital cameras at a set of
> > lava lamps. Your notion might infringe on that
> > patient in that it differs only by the wavelength
> > of radiation measured.
>
> This is a good example of why spell checkers are the spawn
> of the devil. They don't fix errors, they just make them consistent.
>
> Have patience my son.
>
> Paul

LOL! I propose the use of newsgroup noise.


John Elsbury

unread,
Jan 8, 2003, 8:35:53 PM1/8/03
to
On Wed, 8 Jan 2003 16:51:36 -0500, "Omar Bohsali"
<omarb...@omarbohsali.com> wrote:

>Hello.
>
>Is it possible to use thermal noise to generate random numbers.

Yes. To get useful results in a decent timeframe, however, you would
have to look at the variability of the temperature of the thermal
source over time, which (I guess) depends to some extent on its mass.
Most implementations I am aware of use noise from a noise diode or
transistor junction operated at somewhere over its breakdown voltage.
Alternatively you could use any thermionic valve (tube) or a neon,
etc. as a noise source or even something like a geiger counter or
similar particle detector.

I expect, if you do a google search on "schematic random noise
generator" you should get some ideas.
>

<snip wacky toaster idea>

Benjamin Goldberg

unread,
Jan 8, 2003, 8:52:59 PM1/8/03
to
Carlos Moreno wrote:
[snip]

> After all, one could argue that given the *exact* values
> for *all* the physical parameters (speed, position,
> electrical charge, etc.) of every single particle or point
> of matter one second after the "big bang", then you could
> (theoretically speaking) determine the *exact* state of
> the universe *at any given time* (i.e., position and
> speed of *every* single particle in the universe).
>
> Sad notion, isn't it?

Supposing for a moment that you could build a "universe simulator" to
make this determination -- obviously, you're trying to measure how
things are "now", so you'd have to run the simulator up to the point in
time that we exist -- some problems, and questions arise:

1/ Is it possible to run the simulated universe faster than the passage
of time of the actual universe? I suspect not.

2/ Would the simulated humans in the simulated universe be "real
people," with souls as real as our own?

3/ Assuming that there is a God, and that miracles *have* happened
(at least one miracle from at least one holy book), wouldn't that mean
that to make the simulated universe behave the same as our own did, we
would have to create miracles in the simulated universe, precisely the
same as God's miracles in our own real universe?

4/ If everything is determinable, do we have free will?

5/ If we could somehow run the simulator faster than the real
universe, then could we simulate the present and the future? (Keeping
in mind that to simulate the near past and the present, the simulator
would need to be simulating itself!)

--
$..='(?:(?{local$^C=$^C|'.(1<<$_).'})|)'for+a..4;
$..='(?{print+substr"\n !,$^C,1 if $^C<26})(?!)';
$.=~s'!'haktrsreltanPJ,r coeueh"';BEGIN{${"\cH"}
|=(1<<21)}""=~$.;qw(Just another Perl hacker,\n);

Bill Unruh

unread,
Jan 8, 2003, 8:44:58 PM1/8/03
to
"Omar Bohsali" <omarb...@omarbohsali.com> writes:

]Hello.

]Is it possible to use thermal noise to generate random numbers.

]My idea is the following:

]Get a toaster, and measure the noise generated by it. Amplify the noise, and
]then have it digitized by a program that will turn it into numbers.

??? a toaster? What aspect of the toaster do you plan on measuring to
get the noise?

Anyway, run a DC current through a resistor and measure the voltage.
This will give you a variety of noises. Some (1/f) have long time scale
correlations; some (higher frequency) are pretty white in spectrum.


]One question still lingers in my plan:

]Is thermal noise random?

No physical source is "random". They all have biases, correlations, etc.
You can work to get rid of them. Exactly how predictable they make the
noise is a different question. Ie, you might have a correlation which
affects say 1 bit out of 100 if you do not work very hard (ie 99 bits of
"randomness" per 100 bits).

]Some people say that it is, some say that it isn't.

]Please enlighten me.
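
To make the earlier point about biases and correlations concrete (the "99
bits of randomness per 100 bits" example), here is a minimal Python sketch;
the bit stream is simulated, standing in for digitized noise samples, and a
real test suite would check far more than these two statistics:

import random

# Stand-in for a digitized noise source; replace with real sampled bits.
bits = [random.getrandbits(1) for _ in range(100_000)]

# Bias: fraction of ones, which should be close to 0.5 for a fair source.
bias = sum(bits) / len(bits) - 0.5

# Lag-1 correlation proxy: how often neighbouring bits agree (0.5 is ideal).
agree = sum(a == b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

print(f"bias from 0.5: {bias:+.4f}")
print(f"adjacent-bit agreement: {agree:.4f}")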

Alun Jones

unread,
Jan 8, 2003, 8:48:28 PM1/8/03
to
In article <avik6q$cii$1...@nntp.itservices.ubc.ca>, un...@string.physics.ubc.ca (Bill Unruh) wrote:
>??? a toaster? What aspect of the toaster do you plan on measuring to
>get the noise?

Spread sardines on the toast, and you'll have a nice Poisson distribution :-)

Alun.
~~~~

[Please don't email posters, if a Usenet response is appropriate.]
--
Texas Imperial Software | Try WFTPD, the Windows FTP Server. Find us at
1602 Harvest Moon Place | http://www.wftpd.com or email al...@texis.com
Cedar Park TX 78613-1419 | VISA/MC accepted. NT-based sites, be sure to
Fax/Voice +1(512)258-9858 | read details of WFTPD Pro for XP/2000/NT.

lurker

unread,
Jan 8, 2003, 8:56:26 PM1/8/03
to

Wouldn't this scenario require a Bell's theorem/many worlds structure
to the cosmos that had every possible branching possibility happening
some where/time?

Andrew Swallow

unread,
Jan 8, 2003, 9:06:06 PM1/8/03
to
"Benjamin Goldberg" <gol...@earthlink.net> wrote in message
news:3E1CD5FB...@earthlink.net...
> Carlos Moreno wrote:
> [snip]
[snip]

>
> 3/ Assuming that there is a God, and that miracles *have* happened
> (at least one miracle from at least one holy book), wouldn't that mean
> that to make the simulated universe behave the same as our own did, we
> would have to create miracles in the simulated universe, precisely the
> same as God's miracles in our own real universe?
>

Sounds like a job for the debug package. Adjust the appropriate variables
and array entries.

> 4/ If everything is determinable, do we have free will?
>

In computer games players can make the characters go through doors
and climb ladders. The characters cannot go through walls. God
probably developed similar rules for us.

> 5/ If we could somehow run the simulator faster than the real
> universe, then could we simulate the present and the future? (Keeping
> in mind that to simulate the near past and the present, the simulator
> would need to be simulating itself!)
>

Recursive simulations!

Andrew Swallow

Alun Jones

unread,
Jan 8, 2003, 9:23:53 PM1/8/03
to
In article <avij86$q2$1...@newsg3.svr.pol.co.uk>, "Ant" <n...@home.today> wrote:
>LOL! I propose the use of newsgroup noise.

Wouldn't help - it's obviously non-random.

Paul Crowley

unread,
Jan 8, 2003, 10:25:08 PM1/8/03
to
al...@texis.com (Alun Jones) writes:

> In article <avij86$q2$1...@newsg3.svr.pol.co.uk>, "Ant" <n...@home.today> wrote:
> >LOL! I propose the use of newsgroup noise.
>
> Wouldn't help - it's obviously non-random.

I'm not sure of that at all! :-)

(Seriously, the reason not to rely on newsgroup noise in a
cryptographic context is that it's available to your attacker...)
--
__ Paul Crowley
\/ o\ s...@paul.ciphergoth.org
/\__/ http://www.ciphergoth.org/

Joerg Woelke

unread,
Jan 8, 2003, 9:38:43 PM1/8/03
to
Hi!

Carlos Moreno wrote:

[ snip ]

> Now, the "philosophical" argument could be based on the
> doubt if thermal noise is a signal truly unpredictable, or
> if it is just that we don't have the capacity to predict it.

I always thought radioactive decay is random.
"http://www.fourmilab.ch/hotbits/"

> Carlos

Greets, J"o!

--
sigfault

johnekus

unread,
Jan 8, 2003, 11:47:31 PM1/8/03
to
Why would a toaster be any better at deriving random sequences than anyone
else?

Just because a guy can address an audience at a wedding and make a few good
comments about the guests of honor doesn't necessarily make him a better
random sequence generator.

Daaaauuuh!

JK

http://www.crak.com

Home of Gulpit, the packet sniffer for the masses


"Omar Bohsali" <omarb...@omarbohsali.com> wrote in message
news:avi6h7$5tj$1...@bob.news.rcn.net...

johnekus

unread,
Jan 8, 2003, 11:50:34 PM1/8/03
to
I am only going to say this one more time...


The best random number generator is a set of AOL CD-ROMs hanging from
strings in front of colored lights.

You point a fan at the whole deal, use a web cam to sample the colors and
mix all the data using Yarrow.

http://www.crak.com

Home of Gulpit the packet sniffer for the masses.

JK

"Omar Bohsali" <omarb...@omarbohsali.com> wrote in message
news:avi6h7$5tj$1...@bob.news.rcn.net...

Michael Amling

unread,
Jan 9, 2003, 8:22:19 AM1/9/03
to
johnekus wrote:
> The best random number generator is a set of AOL CD-ROMs hanging from
> strings in front of colored lights.
>
> You point a fan at the whole deal, use a web cam to sample the colors and
> mix all the data using Yarrow.

It would be cheaper to just point the fan at a microphone.

--Mike Amling

Mark H. Wood

unread,
Jan 9, 2003, 9:36:54 AM1/9/03
to
In comp.security.misc Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>
> Omar Bohsali wrote:
>
>> Hello.
>>
>> Is it possible to use thermal noise to generate random numbers.

Intel seems to think so. See the Pentium 4 datasheet.

[snippage]


> Now, the "philosophical" argument could be based on the
> doubt if thermal noise is a signal truly unpredictable, or
> if it is just that we don't have the capacity to predict it.

How, exactly, could we possibly know the difference? (Since we're
being "philosophical" here.)

> In fact, even more philosophically, some could argue that
> "randomness" does not exist in nature, and that it is only
> a theoretical concept in our minds. Everything is predictable,
> only that there are many things for which we don't have the
> means or the capacity or the knowledge necessary to predict
> it, so we call them "random", "unpredictable".

Both Newtonian and relativistic mechanics are founded on this idea.
It works well on large scale. Dynamics is full of results which show
a surprising amount of order in apparently "random" behavior.

> After all, one could argue that given the *exact* values
> for *all* the physical parameters (speed, position,

Heisenberg argued that you cannot know both concurrently on the small
scale, and he seems to be right.

> electrical charge, etc.) of every single particle or point
> of matter one second after the "big bang", then you could
> (theoretically speaking) determine the *exact* state of
> the universe *at any given time* (i.e., position and
> speed of *every* single particle in the universe).

David Gerrold wrote a story (_When HARLIE Was One_) featuring a
machine which could model the Universe exactly, but the problem was
that it (necessarily) runs in slower than real time. So it turns out
that the best way to know what the Universe is going to do is to watch
it and see. :-P

--
Mark H. Wood, Lead System Programmer mw...@IUPUI.Edu
MS Windows *is* user-friendly, but only for certain values of "user".

Mark H. Wood

unread,
Jan 9, 2003, 9:44:48 AM1/9/03
to
In comp.security.misc Benjamin Goldberg <gol...@earthlink.net> wrote:
[snip]

> Supposing for a moment that you could build a "universe simulator" to
> make this determination -- obviously, you're trying to measure how
> things are "now", so you'd have to run the simulator up to the point in
> time that we exist -- some problems, and questions arise:
>
> 1/ Is it possible to run the simulated universe faster than the passage
> of time of the actual universe? I suspect not.
>
> 2/ Would the simulated humans in the simulated universe be "real
> people," with souls as real as our own?

Alternately, what does that say about the reality of souls? or the
meaning of "reality"? (Are you uncomfortable yet?)

> 3/ Assuming that there is a God, and that miracles *have* happened
> (at least one miracle from at least one holy book), wouldn't that mean
> that to make the simulated universe behave the same as our own did, we
> would have to create miracles in the simulated universe, precisely the
> same as God's miracles in our own real universe?

3.1: Does that mean that the person operating the simulation is God
in the simulated universe, since he created it and exercises
intimate control over all aspects of its operation? How do you feel
*now*?

> 4/ If everything is determinable, do we have free will?

If we don't, it doesn't matter since there is really nobody here.

> 5/ If we could somehow run the simulator faster than the real
> universe, then could we simulate the present and the future? (Keeping
> in mind that to simulate the near past and the present, the simulator
> would need to be simulating itself!)

Ah, here we go! See Goedel's Incompleteness Theorem. (There's a
nicely accessible exposition on incompleteness, infinite recursion,
and other tasty aspects of self-reference in Douglas Hofstadter's book
_Goedel, Escher, Bach: an Eternal Golden Braid_. It's also a very
good read.)

Mark H. Wood

unread,
Jan 9, 2003, 10:07:33 AM1/9/03
to
In comp.security.misc Andrew Swallow <am.sw...@eatspam.btinternet.com> wrote:
> "Benjamin Goldberg" <gol...@earthlink.net> wrote in message
> news:3E1CD5FB...@earthlink.net...
> [snip]
>>
>> 3/ Assuming that there is a God, and that miracles *have* happened
>> (at least one miracle from at least one holy book), wouldn't that mean
>> that to make the simulated universe behave the same as our own did, we
>> would have to create miracles in the simulated universe, precisely the
>> same as God's miracles in our own real universe?
>>
>
> Sounds like a job for the debug package. Adjust the appropriate variables
> and array entries.

Ooh, see Diane Duane's "Young Wizards" books, with magicians running
around tweaking the "kernels" of various universes. (_The Wizard's
Dilemma_ especially, but start with _So You Want to Be a Wizard_ or
you'll be missing some background material.)

Barry Margolin

unread,
Jan 9, 2003, 10:33:40 AM1/9/03
to
In article <3E1CCC49...@xx.xxx>,

Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>The discussion is mostly philosophical. Most (sane) people
>should agree that thermal noise can be considered random
>for all practical purposes from any conceivable point of
>view.

For crypto purposes, the issue isn't whether it's random, but whether it's
"random enough" -- i.e. are there enough random bits to be useful in
seeding an RNG? This depends, of course, on how precisely we're able to
measure the temperature -- the most randomness is in the low-order bits, so
the more precisely we can measure, the more low-order bits we have. But
more precise measurements also entail more expensive equipment, so there
will be a tradeoff between crypto strength and cost.
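
A minimal Python sketch of the "keep only the low-order bits" idea; the
readings are made-up ADC values, and how many low bits can honestly be kept
depends on the real measurement noise:

# Keep only the low-order bits of each measurement, where most of the
# unpredictability lives. The readings are invented for illustration.
readings = [2047, 2051, 2049, 2046, 2050, 2048, 2052, 2045]

LOW_BITS = 2                               # how many bits to trust (a guess)
mask = (1 << LOW_BITS) - 1

out = 0
for r in readings:
    out = (out << LOW_BITS) | (r & mask)   # concatenate the kept bits

print(format(out, f"0{len(readings) * LOW_BITS}b"))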

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Casey Schaufler

unread,
Jan 9, 2003, 12:08:17 PM1/9/03
to
Paul Pires wrote:
>
> Casey Schaufler <ca...@sgi.com> wrote in message news:3E1CBB4C...@sgi.com...
> > Omar Bohsali wrote:
> > >
> > > Hello.
> > >
> > > Is it possible to use thermal noise to generate random numbers.
> > >
> > > My idea is the following:
> > >
> > > Get a toaster, and measure the noise generated by it. Amplify the noise, and
> > > then have it digitized by a program that will turn it into numbers.
> >
> > Be careful not to step on the LavaRand patients.
> > Yes, someone has a patient on gathering random
> > numbers by pointing digital cameras at a set of
> > lava lamps. Your notion might infringe on that
> > patient in that it differs only by the wavelength
> > of radiation measured.
>
> This is a good example of why spell checkers are the spawn
> of the devil. They don't fix errors, they just make them consistent.
>
> Have patience my son.

As it turns out my spelling of any given word has been shown to
be sufficiently random as to qualify as a cryptographically strong
random number seed. At least I used real words this time!

For some time the Amdahl Unix (remember them?) man page for
ispell was noted as "dedacaded to Casey Schaufler". Good fun.

lurker

unread,
Jan 9, 2003, 12:36:53 PM1/9/03
to

Since detection of random noise is a serial process shouldn't you use
more than one toaster/detector combo to flatten the distribution?
>

Bill Unruh

unread,
Jan 9, 2003, 12:42:59 PM1/9/03
to
Carlos Moreno <moreno_at_mo...@xx.xxx> writes:


]Omar Bohsali wrote:

]The discussion is mostly philosophical. Most (sane) people


]should agree that thermal noise can be considered random
]for all practical purposes from any conceivable point of
]view.

No, it is not. The noise structure of "thermal noise"-- eg the noise
coming from a resistor with a constant current source across it, has
correlations, especially at long times (1/f noise). Furthermore, stray
capacitances, inductances, etc, also introduce correlations into the
noise. All physical systems have such correlations. Some are well
understood, some not. Such correlations mean that the source is NOT
"random" (ie, uncorrelated white noise-- or each bit value equal
probablility 1 and 0 and no correlations between bits).


Scott Nelson

unread,
Jan 9, 2003, 1:03:53 PM1/9/03
to
On Wed, 8 Jan 2003 16:51:36 -0500, "Omar Bohsali"
<omarb...@omarbohsali.com> wrote:

>Hello.
>
>Is it possible to use thermal noise to generate random numbers.
>

>My idea is the following:
>
>Get a toaster, and measure the noise generated by it. Amplify the noise, and
>then have it digitized by a program that will turn it into numbers.
>

>One question still lingers in my plan:
>
>Is thermal noise random?
>
>Some people say that it is, some say that it isn't.
>
>Please enlighten me.

Your first step on the path to enlightenment:
DON'T CROSS POST THIS KIND OF QUESTION.

All noise is in a sense random; that's why we call it "noise".

Our best physics models say that heat is the result of
molecular motion, and molecular motion is dependent on
particles which are subject to quantum uncertainties.
Thermal noise is therefore subject to quantum effects,
_if you attempt to measure it with enough precision_.

But most thermal detectors measure collections of molecules,
and the collections exceed 1,000,000,000,000,000,000 in number.
At that scale, truly unpredictable results are rare.

But as long as one is willing to measure the temperature
for a long time, and distill the information down sufficiently,
it's possible to get high quality randomness, even with a
toaster and a glass thermometer.


Scott Nelson <sc...@helsbreth.org>
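
A minimal Python sketch of that "measure for a long time, then distill"
step; the readings are invented, and the figure of 0.05 bits of entropy per
reading is an assumption, not a measured property of any thermometer:

import hashlib

ENTROPY_PER_READING = 0.05      # assumed, deliberately conservative
NEEDED_BITS = 128

def distill(readings):
    pool = hashlib.sha256()
    credited = 0.0
    for r in readings:
        pool.update(repr(r).encode())
        credited += ENTROPY_PER_READING
    if credited < NEEDED_BITS:
        raise ValueError("not enough readings collected yet")
    return pool.digest()[:NEEDED_BITS // 8]   # 128-bit seed

fake_readings = [20.1 + 0.01 * (i % 7) for i in range(3000)]
print(distill(fake_readings).hex())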

Douglas A. Gwyn

unread,
Jan 9, 2003, 12:18:56 PM1/9/03
to
Guy Macon wrote:
> If I put it anywhere in the universe, I have to update the
> universe simulator to simulate a universe with a universe
> simulator in it. Then I have to update the universe
> simulator to simulate a universe with a universe simulator
> that simulates a universe with a universe simulator in it.

Why not simulate a universe containing the contrary
simulator?

Mok-Kong Shen

unread,
Jan 9, 2003, 1:50:31 PM1/9/03
to

Do you think that unbiasing and then xor-ing together
a sufficient number of such sequences is a satisfactory
solution?

M. K. Shen

Bill Unruh

unread,
Jan 9, 2003, 2:36:14 PM1/9/03
to
Mok-Kong Shen <mok-ko...@t-online.de> writes:

Maybe. The original may well be good enough already (are you really
worried if the effective entropy of 100 bits is really only that of 99
bits?)
Bias is an easy one to handle. Other correlations can be more
difficult to detect and eliminate. If you understand your noise stream,
then correcting it for such correlations is not hard. It is the
understanding that can be hard.

]M. K. Shen
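
For concreteness, here is a minimal Python sketch of the two operations
being discussed: von Neumann unbiasing of one raw stream, and xor-ing
several independent streams together. The streams are simulated, and
neither step removes the kind of longer-range correlations Unruh mentions:

import random

def biased_bits(n, p_one=0.6):
    # Stand-in for one raw, biased noise source.
    return [1 if random.random() < p_one else 0 for _ in range(n)]

def von_neumann(bits):
    # Non-overlapping pairs: 01 -> 0, 10 -> 1, 00 and 11 are discarded.
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

def xor_combine(streams):
    # Bitwise XOR of several streams, truncated to the shortest one.
    n = min(len(s) for s in streams)
    return [sum(s[i] for s in streams) % 2 for i in range(n)]

raw = [biased_bits(10_000) for _ in range(4)]
combined = xor_combine([von_neumann(s) for s in raw])
print(sum(combined) / len(combined))   # should land close to 0.5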

Alun Jones

unread,
Jan 9, 2003, 3:56:03 PM1/9/03
to
In article <87hecjg...@saltationism.subnet.hedonism.cluefactory.org.uk>,
Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> wrote:
>al...@texis.com (Alun Jones) writes:
>
>> In article <avij86$q2$1...@newsg3.svr.pol.co.uk>, "Ant" <n...@home.today> wrote:
>> >LOL! I propose the use of newsgroup noise.
>>
>> Wouldn't help - it's obviously non-random.
>
>I'm not sure of that at all! :-)
>
>(Seriously, the reason not to rely on newsgroup noise in a
>cryptographic context is that it's available to your attacker...)

Although not reliably, and not necessarily in the same order as your own.

The non-randomness comes from a few things:

1. Spam, which can be relied upon to repeat itself hugely.
2. The use of English language almost exclusively.
3. Headers and other formatting items that cause repeatable patterns.

It's possible that you could strip out much of the non-random portions of
Usenet, but I can't see it as something you could reliably use, even for an
application where randomness is required, but you don't care whether an
outsider gets the same random data as you.

Barry Margolin

unread,
Jan 9, 2003, 4:38:39 PM1/9/03
to
In article <DllT9.305$NT1.12...@newssvr11.news.prodigy.com>,

Alun Jones <al...@texis.com> wrote:
>In article <87hecjg...@saltationism.subnet.hedonism.cluefactory.org.uk>,
>Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> wrote:
>>al...@texis.com (Alun Jones) writes:
>>
>>> In article <avij86$q2$1...@newsg3.svr.pol.co.uk>, "Ant" <n...@home.today> wrote:
>>> >LOL! I propose the use of newsgroup noise.
>>>
>>> Wouldn't help - it's obviously non-random.
>>
>>I'm not sure of that at all! :-)
>>
>>(Seriously, the reason not to rely on newsgroup noise in a
>>cryptographic context is that it's available to your attacker...)
>
>Although not reliably, and not necessarily in the same order as your own.
>
>The non-randomness comes from a few things:
>
>1. Spam, which can be relied upon to repeat itself hugely.
>2. The use of English language almost exclusively.
>3. Headers and other formatting items that cause repeatable patterns.
>
>It's possible that you could strip out much of the non-random portions of

Maybe the result of compressing the posts would be a better seed, as
compression should remove much of the redundancy.

>Usenet, but I can't see it as something you could reliably use, even for an
>application where randomness is required, but you don't care whether an
>outsider gets the same random data as you.

Although the outsider has access to all the messages, he doesn't know which
messages you digested and in which order you scanned them to produce your
random seed. Also, there's quite a bit of server-specific data, such as
article numbers, and Path and Xref headers, so unless he's using the same
news server as you are he won't get the same results.

Walter Roberson

unread,
Jan 9, 2003, 4:56:12 PM1/9/03
to
In article <3e1db2a1...@netnews.worldnet.att.net>,
lurker <n...@nospam.org> wrote:
:Since detection of random noise is a serial process shouldn't you use

:more than one toaster/detector combo to flatten the distribution?

As we are talking about toasters, detection of random noise
would be a *cereal* process. To change the distribution, use
a different grain of bread ;-)
--
Rump-Titty-Titty-Tum-TAH-Tee -- Fritz Lieber

lurker

unread,
Jan 9, 2003, 6:11:08 PM1/9/03
to
On 9 Jan 2003 21:56:12 GMT, robe...@ibd.nrc-cnrc.gc.ca (Walter
Roberson) wrote:

Can you substitute hash browns for bread?

Guy Macon

unread,
Jan 9, 2003, 7:35:55 PM1/9/03
to


Barry Margolin wrote:

>For crypto purposes, the issue isn't whether it's random, but whether it's
>"random enough" -- i.e. are there enough random bits to be useful in
>seeding an RNG? This depends, of course, on how precisely we're able to
>measure the temperature -- the most randomness is in the low-order bits, so
>the more precisely we can measure, the more low-order bits we have. But
>more precise measurements also entails more expensive equipment, so there
>will be a tradeoff between crypto strength and cost.

While it is true that more precision entails more expensive equipment,
using an analog high-pass filter and amplifier will cheaply move those
low-order bits higher. It's not too hard to amplify and measure the
noise if you don't mind losing the signal on the way.
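
A minimal Python sketch of the digital analogue of that trick: a
first-difference high-pass filter that discards the slowly varying component
and keeps the sample-to-sample noise. The samples are invented; the
suggestion above is, of course, to do the filtering and amplification in
analog hardware before digitizing:

# First-difference high-pass filter: slow drift cancels out, leaving the
# fast, noisy component. The samples are invented for illustration.
samples = [512, 515, 511, 514, 513, 517, 512, 516]

diffs = [b - a for a, b in zip(samples, samples[1:])]
bits = [d & 1 for d in diffs]        # keep one low-order bit per difference

print(diffs)
print(bits)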

--
Email Guy Macon guymacon+YOUR NAME GOES HE...@spamcop.net <html><head></head>
<body><a href="http://www.guymacon.com/resume.html" >Electrical engineer</a>
for hire: Los Angeles / Orange County CA USA 714-670-1687 See my resume at
http://www.guymacon.com/resume.html .</body><html><!-- www.guymacon.com -->

Guy Macon

unread,
Jan 9, 2003, 7:44:39 PM1/9/03
to


Bill Unruh wrote:

> The noise structure of "thermal noise"-- eg the noise
> coming from a resistor with a constant current source across it, has
> correlations, especially at long times (1/f noise).

There is a problem with the theory that resistors have 1/f noise.
What is the amplitude of the signal at DC? At very, very close
to DC? Are the real-world answers really infinity and very, very
large?

Of course it would take forever to answer the first question with
an experiment and a very, very long time to answer the second.

Guy Macon

unread,
Jan 9, 2003, 7:49:56 PM1/9/03
to


Barry Margolin wrote:

>Although the outsider has access to all the messages, he doesn't know which
>messages you digested and in which order you scanned them to produce your
>random seed. Also, there's quite a bit of server-specific data, such as
>article numbers, and Path and Xref headers, so unless he's using the same
>news server as you are he won't get the same results.

In other words, you randomly picked a news server and randomly picked
the articles to process, keeping those decisions secret. Have you
really increased the entropy over that of the RNG you used to do
the choosing?

Barry Margolin

unread,
Jan 9, 2003, 8:27:19 PM1/9/03
to
In article <v1s65ur...@corp.supernews.com>,

Guy Macon <. http://www.guymacon.com/resume.html .> wrote:
>Barry Margolin wrote:
>
>>Although the outsider has access to all the messages, he doesn't know which
>>messages you digested and in which order you scanned them to produce your
>>random seed. Also, there's quite a bit of server-specific data, such as
>>article numbers, and Path and Xref headers, so unless he's using the same
>>news server as you are he won't get the same results.
>
>In other words, you randomly picked a news server and randomly picked
>the articles to process, keeping those decisions secret. Have you
>really increased the entropy over that of the RNG you used to do
>the choosing?

I think so.

Suppose you have an RNG that chooses a random integer from 1 to 10. You
keep the past 10 days of newspapers, and use this number to select which of
them to digest to seed your random number sequence.

At first glance this doesn't seem to be any better -- you're just selecting
among 10 random number sequences, which you could just as easily do with
the original integer. But the difference is that the function changes
every day. If the simple RNG produces 1 today and 1 tomorrow, you'll get
different sequences.

I admit that I'm not a mathematician and I haven't done a detailed
analysis. The above example is really simple and has flaws (e.g. 1 today
is the same as 2 tomorrow), but I expect that it could be improved upon to
produce good results (e.g. when a newspaper is used, take it out of the set
so that it won't be used again).

Paul Crowley

unread,
Jan 9, 2003, 9:25:05 PM1/9/03
to
al...@texis.com (Alun Jones) writes:
> The non-randomness comes from a few things:
>
> 1. Spam, which can be relied upon to repeat itself hugely.
> 2. The use of English language almost exclusively.
> 3. Headers and other formatting items that cause repeatable patterns.
>
> It's possible that you could strip out much of the non-random portions of
> Usenet, but I can't see it as something you could reliably use, even for an
> application where randomness is required, but you don't care whether an
> outsider gets the same random data as you.

You don't need to strip out the non-random portions. Just estimate
the entropy conservatively and hash the lot, repetition and all. See
the Yarrow paper for a discussion...
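
A minimal Python sketch in that spirit (much simpler than Yarrow itself):
pile the raw text into a hash pool, credit it with a deliberately low
entropy estimate, and only hand out a seed once enough credit has
accumulated. The 0.01 bits-per-byte figure is an arbitrary, conservative
assumption, not something taken from the Yarrow paper:

import hashlib

class EntropyPool:
    def __init__(self, bits_per_byte=0.01, needed_bits=128):
        self.pool = hashlib.sha256()
        self.credit = 0.0
        self.bits_per_byte = bits_per_byte
        self.needed_bits = needed_bits

    def add(self, data):
        self.pool.update(data)
        self.credit += len(data) * self.bits_per_byte

    def seed(self):
        if self.credit < self.needed_bits:
            raise RuntimeError("not enough estimated entropy yet")
        return self.pool.digest()[:self.needed_bits // 8]

pool = EntropyPool()
for article in [b"spam spam spam " * 500] * 10:   # stand-in for raw newsgroup text
    pool.add(article)
print(pool.seed().hex())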

Phil Fites

unread,
Jan 9, 2003, 6:37:37 AM1/9/03
to
No, no, no. Everyone knows it was a really hot cup of black tea
that provided the randomness that led to the Infinite
Improbability Drive... :-)

Omar Bohsali wrote:

> Hello.
>
> Is it possible to use thermal noise to generate random numbers.
>
> My idea is the following:
>
> Get a toaster, and measure the noise generated by it. Amplify the noise, and
> then have it digitized by a program that will turn it into numbers.
>
> One question still lingers in my plan:
>
> Is thermal noise random?
>
> Some people say that it is, some say that it isn't.
>
> Please enlighten me.
>

> --


> Thank You,
>
> Omar Bohsali
> http://www.omarbohsali.com

Carlos Moreno

unread,
Jan 10, 2003, 10:41:11 AM1/10/03
to

Benjamin Goldberg wrote:

> Carlos Moreno wrote:
> [snip]


>
>>After all, one could argue that given the *exact* values
>>for *all* the physical parameters (speed, position,

>>electrical charge, etc.) of every single particle or point
>>of matter one second after the "big bang", then you could
>>(theoretically speaking) determine the *exact* state of
>>the universe *at any given time* (i.e., position and
>>speed of *every* single particle in the universe).
>>

>>Sad notion, isn't it?


>>
>
> Supposing for a moment that you could build a "universe simulator" to
> make this determination -- obviously, you're trying to measure how
> things are "now", so you'd have to run the simulator up to the point in
> time that we exist -- some problems, and questions arise:
>
> 1/ Is it possible to run the simulated universe faster than the passage
> of time of the actual universe? I suspect not.
>
> 2/ Would the simulated humans in the simulated universe be "real
> people," with souls as real as our own?
>

> 3/ Assuming that there is a God, and that miracles *have* happened
> (at least one miracle from at least one holy book), wouldn't that mean
> that to make the simulated universe behave the same as our own did, we
> would have to create miracles in the simulated universe, precisely the
> same as God's miracles in our own real universe?
>

> 4/ If everything is determinable, do we have free will?
>

> 5/ If we could somehow run the simulator faster than the real
> universe, then could we simulate the present and the future? (Keeping
> in mind that to simulate the near past and the present, the simulator
> would need to be simulating itself!)


Well, you kind of agree with me in that this is a *philosophical*
debate, and not a scientific/practical one :-)

A few comments:

I'm not talking about building a machine that will calculate
everything; I'm talking about *the calculability* itself;
yes, we can not possibly measure every physical parameter
of every single particle and/or point of matter of the
Universe. But assuming that, at a given time, every single
particle of the Universe *does have* a given value for each
of those parameters, then the philosophical question is:
would those values be sufficient information to determine
*everything* about every single particle of the universe at
*any given time*? (past or future)

From the "purely scientific" (i.e., atheist :-)) point
of view, I would argue "yes" (or at least, I would say
"I tend to believe yes until proven the contrary" -- but
since the contrary can not be proven -- neither this
argument -- then I'll stick to my "philosophical" belief
that it is (would be) possible).

Heisenberg said that the exact position and velocity
can not be known at the same time. *Even if that were
true* (which I don't believe for a second;
after all, Newton said that F=m*a, and someone in the
late 1800s said "Heavier-than-air flying machines are
impossible to build", etc.), that [Heisenberg's principle]
doesn't necessarily mean that the position and speed
do not have certain exact values at a given time; the
way I see it, it means that there is no way for us to
determine those (exact) values (since determining
them would imply that some other particle has to
interact with them, and thus affect the values).


As for your point 4... Why is it so hard to believe
that we do not really have free will?? Free will may
be an extremely abstract concept that lives only in
our "cognitive" ways of perceiving the universe...

I mean, after all, the exact position and state of
my body and my mind 1 minute from now will be one
and only one. Whatever it is, *it will be* that
particular *one* set of values; not two, not three
(yeah, we could talk about three possibilities; but
only one will happen -- the one that happens). That
I can not know it right now, that's a different thing.
And that part of it will involve all of the molecules
interactions that happened inside my brain to take
the decisions that will lead to that state [that we
call "free will"], that's another thing -- but it is
also part of that future state. We want to call it
free will, because that's how we perceive it. We
make a decision, and we cause the future to go one
way or another -- but is that decision the cause?
Or is it the effect? (of the state of the zillions
of molecules of our brain, that had a given position
and speed and parameters which are deterministically
going to produce a certain chain of events)

Look at it from this point of view... Let's define
a truly random process as it is "mathematically"
defined. So, there is the random process, and there
are particular realizations of that process... Would
you say that a particular realization of that process
is a random process?? I would say it's not. It is
one particular realization (that from the practical
point of view you could call it random because one
might not have sufficient information or capacity
to predict things, that's another thing).

So, you could say that "reality" (i.e., the current
state of the entire universe) as a function of time
*is one and only one* realization of the presumably
random process that is "reality".... So, what's
truly random about it??

Sure, from our (practical) point of view, the future
does have randomness, given how ridiculously impossible
it would be to predict it by "calculating" things
based on the position and speed of every single
particle (we don't even need to go as far as to
prove that building such a universe simulator is
impossible :-)).

Yes, all the above discussion assumes that the
Universe *does follow* certain rules (call them
Physics, mathematics, etc.), and that those rules
have no exception (regardless of whether or not
we know those rules, or if it is possible at all
to know/understand those rules given the structure
and the capacity of our brains/minds). So, in other
words, I'm arguing that all the above discussion
makes sense only if there is no God, or if there
is, then that God(s) created those "rules" and will
unconditionally stick to the "non violation" of
those rules. (the "miracles" that religions talk
about *may* be based on the rules of the universe,
BTW :-) -- how do you define a "miracle"? I
would define it as something that my current
knowledge about the rules of the universe does
not allow me to understand it :-))

Carlos Moreno

unread,
Jan 10, 2003, 10:52:39 AM1/10/03
to

Bill Unruh wrote:


I think you have a misconception here... (though I wonder if it
is me who has the misconception).

To me, the definition of "random" (if there is one), involves
*only* unpredictability and lack of any fixed pattern.

Uniformly distributed and uncorrelated are different things
(they imply stronger requirements).

In other words, having a higher probability of taking one
particular value still doesn't make it predictable. (i.e.,
you still can not predict if three consecutive coin tosses
will produce at least one head -- you could systematically
state it as your "prediction", and you will be right more
often than wrong... But you did not *predict* the outcome,
and the fact that you get three tails or not *is still* a
random variable).

Now, of course, for many applications (including particularly
cryptography), non-uniformly distributed random sources are
useless (unless you can process them and extract as much
uniformly-distributed data as possible). But that's a
practical consideration; despite thermal noise exhibiting
certain correlation characteristics, *it is* still random
(well, philosophical discussions aside :-)). And of course,
as you say, in most practical cases, the correlation exhibited
by thermal noise is almost surely below the practical limit
of observability, and/or would not affect the system
we're using it for.

Carlos
--

Barry Margolin

unread,
Jan 10, 2003, 11:01:34 AM1/10/03
to
In article <3E1EE997...@xx.xxx>,

Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>I mean, after all, the exact position and state of
>my body and my mind 1 minute from now will be one
>and only one. Whatever it is, *it will be* that
>particular *one* set of values; not two, not three
>(yeah, we could talk about three possibilities; but
>only one will happen -- the one that happens).

The Many-Universes interpretation of Quantum Mechanics is based on the idea
that *all* the possibilities happen. At each decision point, the universe
splits up into different universes, each with different results.

So Schrodinger's Cat really is both dead and alive, but in different
universes. Opening the box allows you to find out which universe you're
in. But meanwhile, in one of the other universes, the other you is opening
the box and finding out something different.

Barry Margolin

unread,
Jan 10, 2003, 11:17:36 AM1/10/03
to
In article <3E1EEC47...@xx.xxx>,

Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>Now, of course, for many applications (including particularly
>cryptography), non-uniformly distributed random sources are
>useless (unless you can process them and extract as much
>uniformly-distributed data as possible).

If you know the nature of the correlation, it seems like it should not be
too difficult to create an algorithm to remove that component of the raw
data to produce a random stream with the properties you want.

Also, I intuitively expect that these correlations are minimized if you
measure precisely enough and only use the low-order bits. E.g. firefly
flashes and women's periods are known to sync up, but I presume it's only
at low-precision measurements; if you measure fireflies to .1-second
precision and women living together to 1-day preciseion, high correlations
will be seen, but not if you measure them to the microsecond and use the
lowest 2 decimal places.

I suppose it's possible that some of these correlations take place at the
quantum level, so that they'll appear at all precisions that we're able to
measure.

Alun Jones

unread,
Jan 10, 2003, 11:30:44 AM1/10/03
to
In article <3E1D5F6F...@rogers.com>, Phil Fites <fi...@rogers.com> wrote:
>No, no, no. Everyone knows it was a really hot cup of black tea
>that provided the randomness that led to the Infinite
>Improbability Drive... :-)

Strictly speaking, the cup of hot tea was only a convenient generator of
Brownian motion. Any observable Brownian motion source could presumably do
the same. Going back to the toaster, you could watch the dust motes as they
fly into and out of the turbulent stream of hot air above the toaster.

How long before someone adds the concept of a "self-winding watch" and
smart-card, and produces a smart-card whose randomness is generated by however
many 'jigs' or 'jogs' it gets in the carrier's pocket? Absolutely no use for
a server, but great for a personal identification card!

Carlos Moreno

unread,
Jan 10, 2003, 11:46:14 AM1/10/03
to

Carlos Moreno wrote:

>
> [...]


Ooops, how rude of me! I didn't even sign the
message! :-)

Cheers,

Carlos
--


Barry Margolin

unread,
Jan 10, 2003, 12:01:18 PM1/10/03
to
In article <UyCT9.110$Bh3.13...@newssvr12.news.prodigy.com>,

Alun Jones <al...@texis.com> wrote:
>How long before someone adds the concept of a "self-winding watch" and
>smart-card, and produces a smart-card whose randomness is generated by however
>many 'jigs' or 'jogs' it gets in the carrier's pocket? Absolutely no use for
>a server, but great for a personal identification card!

Sounds like a neat idea. Quick, apply for the patent!

If the card has a keypad for entering a PIN (like SecurID cards), perhaps
it could instead incorporate pressure sensitivity. The high-order bits
could be used as a biometric authenticator (users are probably pretty
consistent about how hard they press the keys, and differ enough for it to
be a useful authenticator when combined with the knowledge of the PIN),
while the low-order bits could be used by the RNG.
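
A minimal Python sketch of that split, with invented pressure readings: the
high-order bits act as a crude "how hard does this user press" template, and
the low-order bits feed the RNG. The bit widths and readings are arbitrary
choices for illustration:

import hashlib

readings = [742, 756, 730, 749, 761, 738]   # invented keypress pressure samples

LOW_BITS = 3
template = [r >> LOW_BITS for r in readings]            # coarse pressing style
pool_bits = [r & ((1 << LOW_BITS) - 1) for r in readings]

seed = hashlib.sha256(bytes(pool_bits)).digest()        # low bits -> RNG seed
print("template:", template)
print("seed:", seed.hex())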

Jonathan Day

unread,
Jan 10, 2003, 12:13:45 PM1/10/03
to
"Omar Bohsali" <omarb...@omarbohsali.com> wrote in message news:<avi6h7$5tj$1...@bob.news.rcn.net>...

> Hello.
>
> Is it possible to use thermal noise to generate random numbers.

Yes, it is. The most trivial way to demonstrate this is to get a
highly precise Analog to Digital Converter and connect it to
nothing. No input signal at all. The drift on the lowest bits is
caused by a mix of thermal noise and voltage instability.

The function to estimate this noise is:

(Forecasted weather * Actual Weather) % (Latest Opinion Poll)

Bill Unruh

unread,
Jan 10, 2003, 1:24:16 PM1/10/03
to
Barry Margolin <bar...@genuity.net> writes:

]In article <v1s65ur...@corp.supernews.com>,

]I think so.

So, you say that the randomness is not just the integers 1 to 10 but
also which day you apply it to. Of course narrowing down the day is not
that hard, so the randomness added by "which day" is not that great.
Ie, this is not a randomness amplification process.

Barry Margolin

unread,
Jan 10, 2003, 1:31:47 PM1/10/03
to
In article <avn34g$o9c$1...@nntp.itservices.ubc.ca>,

Bill Unruh <un...@string.physics.ubc.ca> wrote:
>So, you say that the randomness is not just the integers 1 to 10 but
>also which day you apply it to. Of course narrowing down the day is not
>that hard, so the randomness added by "which day" is not that great.
>Ie, this is not a randomness amplification process.

But in the case of using the news spool as the noise source, there's also
the fact that every site's news spool is different (so the perpetrator
would have to have access to your news spool), and constantly changing as
news flows in (so he would have to know the precise moment that you took
the snapshot).

Bill Unruh

unread,
Jan 10, 2003, 1:33:50 PM1/10/03
to
Carlos Moreno <moreno_at_mo...@xx.xxx> writes:


]Bill Unruh wrote:

]> Carlos Moreno <moreno_at_mo...@xx.xxx> writes:
]>
]> ]Omar Bohsali wrote:
]>
]> ]The discussion is mostly philosophical. Most (sane) people
]> ]should agree that thermal noise can be considered random
]> ]for all practical purposes from any conceivable point of
]> ]view.
]>
]> No, it is not. The noise structure of "thermal noise"-- eg the noise
]> coming from a resistor with a constant current source across it, has
]> correlations, especially at long times (1/f noise). Furthermore, stray
]> capacitances, inductances, etc, also introduce correlations into the
]> noise. All physical systems have such correlations. Some are well
]> understood, some not. Such correlations mean that the source is NOT
]> "random" (ie, uncorrelated white noise-- or each bit value equal
]> probablility 1 and 0 and no correlations between bits).


]I think you have a misconception here... (though I wonder if it
]is me who has the misconception).

]To me, the definition of "random" (if there is one), involves
]*only* unpredictability and lack of any fixed pattern.

Correlations imply predictability. If your height is correlated with
your weight, knowing your height I can predict your weight with better
than "random".


]Uniformly distributed and uncorrelated are different things
](they imply stronger requirements).

No, if it is not uniformly distributed (ie biased) then you can use that
information to predict with better than random certainty what the next
number is. If you have correlations you can use them to make
predictions as well.

]In other words, having a higher probability of taking one


]particular value still doesn't make it predictable. (i.e.,
]you still can not predict if three consecutive coin tosses
]will produce at least one head -- you could systematically
]state it as your "prediction", and you will be right more
]often than wrong... But you did not *predict* the outcome,
]and the fact that you get three tails or not *is still* a
]random variable).

The question is not complete confidence; the question is whether the
procedure is better than "exhaustive search". Predictability in a
cryptographic sense does not mean only deterministic predictability;
probabilistic predictability is also good. If a cryptographer could show that he could
use the last 10 entries in an RC4 stream to change the probability of
the next output from completely random, this would be a form of "break"
of RC4. Of course the stronger the correlation the more predictable. If
the probabilities are only changed from uniform to say a 1/1000 bias,
this may not matter much, but a cryptographer would worry about it.
All of the "card counting" schemes to beat Las Vegas only change the
probabilities by a tiny amount, but they are enough to make you rich.

]Now, of course, for many applications (including particularly


]cryptography), non-uniformly distributed random sources are
]useless (unless you can process them and extract as much
]uniformly-distributed data as possible). But that's a
]practical consideration; despite thermal noise exhibiting
]certain correlation characteristics, *it is* still random
](well, philosophical discussions aside :-)). And of course,

Sorry, at what level of correlation does it cease being "random" in your
definition? If only one bit in 10^12 is unpredictable, is the system
still random?


]as you say, in most practical cases, the correlation exhibited


]by thermal noise is almost sure below the practical limit
]of observability, and/or would not affect the system for
]which we're using it on.

Probably true, and if you know about it, you can distill randomness out
of a correlated system. But you need to know about it.

]Carlos
]--

Bill Unruh

unread,
Jan 10, 2003, 1:37:57 PM1/10/03
to
Barry Margolin <bar...@genuity.net> writes:

]In article <3E1EEC47...@xx.xxx>,


]Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
]>Now, of course, for many applications (including particularly
]>cryptography), non-uniformly distributed random sources are
]>useless (unless you can process them and extract as much
]>uniformly-distributed data as possible).

]If you know the nature of the correlation, it seems like it should not be
]too difficult to create an algorithm to remove that component of the raw
]data to produce a random stream with the properties you want.

Sure, if you know them. The problem is people being told "A is random"
and not knowing about the biases and correlations.


]Also, I intuitively expect that these correlations are minimized if you


]measure precisely enough and only use the low-order bits. E.g. firefly
]flashes and women's periods are known to sync up, but I presume it's only
]at low-precision measurements; if you measure fireflies to .1-second
]precision and women living together to 1-day preciseion, high correlations
]will be seen, but not if you measure them to the microsecond and use the
]lowest 2 decimal places.

Actually I would not trust the low order bits in a piece of measuring
apparatus, since they could well be correlated due to the nature of the
measuring apparatus.

Know your source, and use it wisely -- this is about the best I would
say.

]I suppose it's possible that some of these correlations take place at the

Bill Unruh

unread,
Jan 10, 2003, 1:40:09 PM1/10/03
to
Guy Macon <. http://www.guymacon.com/resume.html .> writes:


]Bill Unruh wrote:

]> The noise structure of "thermal noise"-- eg the noise
]> coming from a resistor with a constant current source across it, has
]> correlations, especially at long times (1/f noise).

]There is a problem with the theory that resistors have 1/f noise.
]What is the amplitude of the signal at DC? At very, very close
]to DC? Are the real-world answers really infinty and very, very
]large?

]Of course it would take forever to answer the first question with
]an experiment and a very, very long time to answer the second.


1/f noise is very very poorly understood. The experimental fact is that
almost all systems which have been measured for a "very very" long
time show 1/f noise, and there is no indication that it disappears at
"very very very" long times.

Barry Margolin

unread,
Jan 10, 2003, 2:25:11 PM1/10/03
to
In article <avn3u5$ojb$1...@nntp.itservices.ubc.ca>,

Bill Unruh <un...@string.physics.ubc.ca> wrote:
>Barry Margolin <bar...@genuity.net> writes:
>
>]In article <3E1EEC47...@xx.xxx>,
>]Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>]>Now, of course, for many applications (including particularly
>]>cryptography), non-uniformly distributed random sources are
>]>useless (unless you can process them and extract as much
>]>uniformly-distributed data as possible).
>
>]If you know the nature of the correlation, it seems like it should not be
>]too difficult to create an algorithm to remove that component of the raw
>]data to produce a random stream with the properties you want.
>
>Sure, if you know them. The problem is people being told "A is random"
>and not knowing about the biases and correlations.

There have been several messages saying that thermal noise is 1/f. Well, I
don't know what that means, but I assumed it was the kind of knowledge that
would be useful in filtering out the correlation.

>Actually I would not trust the low order bits in a piece of measuring
>apparatus, since they could well be correlated due to the nature of the
>measuring apparatus.

Good point. And I presume the same may be true for an amplifier (one of
the other posts mentioned that you can get the low-order bits of a
measurement using a high-pass filter and then an amplifier).

Barry Margolin

unread,
Jan 10, 2003, 2:20:43 PM1/10/03
to
In article <avn3me$ohg$1...@nntp.itservices.ubc.ca>,

Bill Unruh <un...@string.physics.ubc.ca> wrote:
>All of the "card counting" schemes to beat Las Vegas only change the
>probabilities by a tiny amount, but they are enough to make you rich.

That's only because the house's advantage to begin with was tiny. But if
the house has a huge advantage, then adjusting the probabilities by a tiny
amount would still make the house a winner, but just by a smaller amount.

A simple Caesar cipher is like casino odds. All you need is a little edge,
like a table of letter and digraph frequencies (e.g. ETAOIN SHRDLU for
English), and you can easily crack the messages.

But most real codes are much harder than that, analogous to a casino with
huge house odds, aren't they? If you discover a bias that reduces cracking
time by 10%, but typical cracking time is 10 months, then it's *still* 9
months, which is OK. Unless the bias reduces the strength of the crypto
scheme by an order of magnitude, it's probably not a real problem.

Paul Crowley

unread,
Jan 10, 2003, 4:25:09 PM1/10/03
to
Barry Margolin <bar...@genuity.net> writes:
> There have been several messages saying that thermal noise is 1/f. Well, I
> don't know what that means, but I assumed it was the kind of knowledge that
> would be useful in filtering out the correlation.

There's no need to filter out the correlation. Just make a
conservative estimate of the entropy and feed all the raw data
straight to Yarrow:

http://www.counterpane.com/yarrow.html
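
For concreteness, here is one way a "conservative estimate of the entropy"
might be made -- a minimal Python sketch that credits the source only with
its min-entropy, estimated from byte frequencies. It assumes independent
samples (which, as others in this thread point out, a physical source may
not deliver), and os.urandom is only a stand-in for the actual noise source:

    from collections import Counter
    import math, os

    def min_entropy_per_byte(samples: bytes) -> float:
        # Credit only -log2 of the most probable byte value: a conservative
        # (min-entropy) figure, assuming samples are independent.
        counts = Counter(samples)
        p_max = max(counts.values()) / len(samples)
        return -math.log2(p_max)

    raw = os.urandom(4096)                    # stand-in for the noise source
    print(min_entropy_per_byte(raw), "bits of credited entropy per byte")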

Bill Unruh

unread,
Jan 10, 2003, 5:19:54 PM1/10/03
to
Barry Margolin <bar...@genuity.net> writes:

]In article <avn3me$ohg$1...@nntp.itservices.ubc.ca>,


]Bill Unruh <un...@string.physics.ubc.ca> wrote:
]>All of the "card counting" schemes to beat Las Vegas only change the
]>probabilities by a tiny amount, but they are enough to make you rich.

]That's only because the house's advantage to begin with was tiny. But if
]the house has a huge advantage, then adjusting the probabilities by a tiny
]amount would still make the house a winner, but just by a smaller amount.

]A simple Caesar cipher is like casino odds. All you need is a little edge,
]like a table of letter and digraph frequencies (e.g. ETAOIN SHRDLU for
]English), and you can easily crack the messages.

]But most real codes are much harder than that, analogous to a casino with
]huge house odds, aren't they? If you discover a bias that reduces cracking
]time by 10%, but typical cracking time is 10 months, then it's *still* 9
]months, which is OK. Unless the bias reduces the strength of the crypto
]scheme by an order of magnitude, it's probably not a real problem.


Sure. The question was whether physical process X was a good source of
random numbers. I have no idea what he wants them for. If it is to
decide whether to have eggs or ham for breakfast, then using his toaster
to decide is fine, no matter what the biases are. If he wants to use it
to design an online high volume betting game, the demands may well be
much more stringent. To decide he has to know how to balance the
possible correlations in the physical process and their effect on his
random stream against his use of that stream. If he thinks he can
just plug his computer into his toaster and get a completely
unpredictable stream, then he is wrong. That does not mean it is not
good enough for what he wants. After all, RC4 is terribly non-random
(only say 256 bits of randomness even in an output of 10^5 bits), but
it is good enough to hide secrets for most applications.
The correlations in physical sources tend to be much more linear than
the correlations in, say, RC4. Running the physical system through MD5,
say, will make those linear correlations highly non-linear, and very
hard to use, and may even remove them altogether (randomness
distillation).

I would suspect that measuring the noise out of a resistor with a
constant current source driving it, and then feeding the bits through
say MD5 with a 2-1 compression (i.e. 256 bits in for each 128 bits out)
would be wonderfully random for all applications -- until of course
someone decided to use only the high order bit of the voltage from the
resistor, with a perfect correlation over time.
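
As a minimal sketch of that 2-1 compression (Python; read_noise_bits is a
hypothetical stand-in for the digitized resistor noise, and MD5 is used only
because it is the hash named above):

    import hashlib, os

    def read_noise_bits(n_bytes: int) -> bytes:
        # Placeholder for sampling the amplified, digitized resistor noise.
        return os.urandom(n_bytes)

    def distill_block() -> bytes:
        raw = read_noise_bits(32)             # 256 raw bits in ...
        return hashlib.md5(raw).digest()      # ... 128 distilled bits out

    key_material = b"".join(distill_block() for _ in range(8))   # 1024 bits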

Bill Unruh

unread,
Jan 10, 2003, 5:25:33 PM1/10/03
to
Barry Margolin <bar...@genuity.net> writes:

]In article <avn3u5$ojb$1...@nntp.itservices.ubc.ca>,


]Bill Unruh <un...@string.physics.ubc.ca> wrote:
]>Barry Margolin <bar...@genuity.net> writes:
]>
]>]In article <3E1EEC47...@xx.xxx>,
]>]Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
]>]>Now, of course, for many applications (including particularly
]>]>cryptography), non-uniformly distributed random sources are
]>]>useless (unless you can process them and extract as much
]>]>uniformly-distributed data as possible).
]>
]>]If you know the nature of the correlation, it seems like it should not be
]>]too difficult to create an algorithm to remove that component of the raw
]>]data to produce a random stream with the properties you want.
]>
]>Sure, if you know them. The problem is people being told "A is random"
]>and not knowing about the biases and correlations.

]There have been several messages saying that thermal noise is 1/f. Well, I
]don't know what that means, but I assumed it was the kind of knowledge that
]would be useful in filtering out the correlation.

No, thermal noise has a 1/f component to it at low frequencies. (This
means that there are long time correlations in the output of the noise).
And yes, knowing they are there, and estimating them or measuring them,
allows you to "remove them". But you have to know you have them to
remove them.

Thermal noise from a resistor does not just have 1/f correlations. It
also has other correlations as well (eg stray capacitances/inductance in the
system introduce correlations, dead times in A-D converters introduce
other correlations, etc).


]>Actually I would not trust the low order bits in a piece of measuring

Bill Unruh

unread,
Jan 10, 2003, 5:30:53 PM1/10/03
to
Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> writes:

]Barry Margolin <bar...@genuity.net> writes:
]> There have been several messages saying that thermal noise is 1/f. Well, I
]> don't know what that means, but I assumed it was the kind of knowledge that
]> would be useful in filtering out the correlation.

]There's no need to filter out the correlation. Just make a
]conservative estimate of the entropy and feed all the raw data
]straight to Yarrow:

Yarrow is a PRNG. Its purpose is effective (though not theoretical)
randomness amplification. The output of Yarrow can be no more random
than the input.

What you really want is randomness distillation (i.e. given N bits with r
bits of effective redundancy in those N bits, to produce a set of N-r
bits with no redundancy), not randomness amplification (given N
bits, produce M>>N bits of output which are "pseudo" random).

As I said, I think taking the output of the physical process, and then
feeding it through a cryptographic hash is probably (I have no proof) a
wonderful distillation process.


]http://www.counterpane.com/yarrow.html

Paul Crowley

unread,
Jan 10, 2003, 9:25:09 PM1/10/03
to
un...@string.physics.ubc.ca (Bill Unruh) writes:
> Yarrow is a PRNG. Its purpose is effective (though not theoretical)
> randomness amplification. The output of Yarrow can be no more random
> than the input.
>
> What you really want is randomness distillation (i.e. given N bits with r
> bits of effective redundancy in those N bits, to produce a set of N-r
> bits with no redundancy), not randomness amplification (given N
> bits, produce M>>N bits of output which are "pseudo" random).
>
> As I said, I think taking the output of the physical process, and then
> feeding it through a cryptographic hash is probably (I have no proof) a
> wonderful distillation process.

I think you've misunderstood what Yarrow is - it includes a detailed
analysis of its distillation stage. I recommend reading the Yarrow
paper thoroughly if you're interested in this sort of thing.

http://www.counterpane.com/yarrow.html

Carlos Moreno

unread,
Jan 10, 2003, 10:24:56 PM1/10/03
to

Bill Unruh wrote:

>
> ]I think you have a misconception here... (though I wonder if it
> ]is me who has the misconception).
>
> ]To me, the definition of "random" (if there is one), involves
> ]*only* unpredictability and lack of any fixed pattern.
>
> Correlations imply predictability.


I disagree! I mean, I agree with your point of view about what
unpredictability should mean in the cryptographic sense -- a
correlated source is a bad quality source where random numbers
are needed for a cryptographic application.

But what has practical usefulness and what doesn't is one thing,
and the definition of random and the definition of deterministic
are another thing.

I mean, the phrase "uniformly distributed random number" would
not make sense (well, it would be an ugly redundancy) according
to your definition.

Another thing: is an 8-bit uniformly distributed random
number not really random?? After all, I can predict it:
it will be something between 0 and 255: there, *I predicted
it* (with absolute certainty, *I will be right* if I state
that as my prediction). An 8-bit random number may be useless
as an encryption key, but it is still a random number (an
encryption key requires a different kind of random number,
yes, but that doesn't mean anything...)


I insist: random implies only unpredictability (as in being
impossible to predict -- with absolute certainty -- *the value*,
the outcome of an experiment before we have had access to such
value or such outcome) and lack of any observable fixed pattern.

Uniformly distributed is an extra condition, which must be met
by any random number generator if we're going to use it for
cryptographic applications.

Carlos
--

Bill Unruh

unread,
Jan 10, 2003, 11:09:09 PM1/10/03
to
Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> writes:

]un...@string.physics.ubc.ca (Bill Unruh) writes:
]> Yarrow is a PRNG. Its purpose is effective (though not theoretical)
]> randomness amplification. The output of Yarrow can be no more random
]> than the input.
]>
]> What you really want is randomness distillation (i.e. given N bits with r
]> bits of effective redundancy in those N bits, to produce a set of N-r
]> bits with no redundancy), not randomness amplification (given N
]> bits, produce M>>N bits of output which are "pseudo" random).
]>
]> As I said, I think taking the output of the physical process, and then
]> feeding it through a cryptographic hash is probably (I have no proof) a
]> wonderful distillation process.

]I think you've misunderstood what Yarrow is - it includes a detailed
]analysis of its distillation stage. I recommend reading the Yarrow
]paper thoroughly if you're interested in this sort of thing.

Yarrow is a PRNG. It does try to distill but it seems to just use a hash
to do the distillation, as I suggested above, but I must admit I only
quickly looked at the paper.


Bill Unruh

unread,
Jan 10, 2003, 11:15:01 PM1/10/03
to
Carlos Moreno <moreno_at_mo...@xx.xxx> writes:


]Bill Unruh wrote:

I am afraid that my definition is stronger -- the knowledge of any
number n of outputs of the random number generator gives no information
about (does not bias in any way) the following bytes of the generator
output (given the constraints of the random numbers -- e.g. 8 bits, 16
bits -- and the distribution: uniform, gaussian, poisson, ...). I.e.,
there are no correlations between the first n bits and the following
bits. If I have an 8-bit uniform stream, and knowing one byte tells me
that the next byte must be one of 10 bytes, then the sequence is
partially predictable and is non-random.

But at this point we are arguing semantics.


]Uniformly distributed is an extra condition, which must be met

Paul Crowley

unread,
Jan 11, 2003, 8:25:11 AM1/11/03
to

It does as you suggest use a hash, but there are some tweaks in the
way it works. It's nothing revolutionary, it's a pretty obvious way
to build a PRNG, but it's the obvious design with all the details
worked out by a team of top-flight cryptographers. Follow their
design and you're much more likely to avoid mistakes that introduce
insecurity.

Carlos Moreno

unread,
Jan 11, 2003, 10:58:09 AM1/11/03
to
Bill Unruh wrote:
>
> ]I insist: random implies only unpredictability (as in being
> ]impossible to predict -- with absolute certainty -- *the value*,
> ]the outcome of an experiment before we have had access to such
> ]value or such outcome) and lack of any observable fixed pattern.
>
> I am afraid that my definition is stronger -- the knowledge of any
> number n of outputs of the random number generator gives no information
> about (does not bias in any way) the following bytes of the generator
> output (given the constraints of the random numbers -- e.g. 8 bits, 16
> bits -- and the distribution: uniform, gaussian, poisson, ...).

See? The distribution: you seem to be contradicting yourself!
If there is a distribution, then you can say something about the
next coming number, and there is a bias.

Sure, correlation and non-flat distribution are two different
things, with different effects on the potential vulnerabilities
of a system that uses such a random generator, but the thing is:
both are random; one of them is just not the type of random
that is useful (well, let's say optimal, because it may be
useful) for cryptographic applications.

> But at this point we are arguing semantics.

Well, yes, I'm pretty sure that we both do know and understand
what each other is talking about... :-)

Carlos
--

Walter Roberson

unread,
Jan 11, 2003, 11:44:21 AM1/11/03
to
In article <AmCT9.14$Gl5...@paloalto-snr1.gtei.net>,
Barry Margolin <bar...@genuity.net> wrote:
:Also, I intuitively expect that these correlations are minimized if you
:measure precisely enough and only use the low-order bits. E.g. firefly
:flashes and women's periods are known to sync up, but I presume it's only
:at low-precision measurements; if you measure fireflies to .1-second
:precision and women living together to 1-day precision, high correlations
:will be seen, but not if you measure them to the microsecond and use the
:lowest 2 decimal places.

I had a bit of difficulty finding flash rate information, but
here's one source:

http://www.colostate.edu/Depts/Entomology/courses/en507/papers_1999/matthies.htm

Male fireflies usually congregate in trees along blackish rivers.
In a group synchrony, they continuously flash species specific
display flashes at highly precise species specific interflash
intervals. An example of how precise these flies truly are comes
from Pt. malaccae. They showed a mean interflash of 556.3
milliseconds, with plus, minus 2.5 milliseconds standard deviation,
corresponding to a variation of plus, minus 0.9 percent.

With one standard deviation covering from 553800 us to 558800 us,
measurements at the tens-of-microseconds level might indeed be random
-enough- for many purposes -- but possibly not good enough for everyone!

--
I don't know if there's destiny,
but there's a decision! -- Wim Wenders (WoD)

Guy Macon

unread,
Jan 11, 2003, 3:08:02 PM1/11/03
to

I am well aware of this. The problem is that the amplitude of the
current approaches infinity as the frequency approaches DC. This
implies that I could take a resistor with a constant 1mA current
source across it, wait long enough, and measure millions of amps
going through it. Could this possibly be true?

A further problem I have is that the intensity of the light coming from
quasars has 1/f noise fluctuations. That's a fairly long period for
the experiment, but then again, quasars are very bright. Does their
brightness really increase without limit as you wait longer and
longer?

--
Email Guy Macon guymacon+YOUR NAME GOES HE...@spamcop.net <html><head></head>
<body><a href="http://www.guymacon.com/resume.html" >Electrical engineer</a>
for hire: Los Angeles / Orange County CA USA 714-670-1687 See my resume at
http://www.guymacon.com/resume.html .</body><html><!-- www.guymacon.com -->

Guy Macon

unread,
Jan 11, 2003, 3:22:12 PM1/11/03
to


Bill Unruh wrote:


>
>Barry Margolin writes:
>
>>There have been several messages saying that thermal noise is 1/f. Well, I
>>don't know what that means, but I assumed it was the kind of knowledge that
>>would be useful in filtering out the correlation.

It means this: pick a frequency - any frequency.

Calculate one divided by that frequency.

That's how big the noise will be at that frequency relative to
the noise at other frequencies. If you want an absolute value,
measure one frequency and scale all of the answers accordingly.

To see why we have a problem with this, try assuming that the
frequency is zero (DC) and do the calculation.
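
A tiny numerical illustration of that difficulty (plain Python; the
frequencies are chosen arbitrarily):

    # Relative 1/f noise power at a few frequencies: it grows without bound
    # as the frequency approaches DC, and is undefined at exactly f = 0.
    for f_hz in (1000.0, 1.0, 1e-3, 1e-6, 1e-9):
        print(f"{f_hz:>10g} Hz -> relative power {1.0 / f_hz:g}")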

Bill Unruh

unread,
Jan 11, 2003, 5:02:38 PM1/11/03
to
Guy Macon <. http://www.guymacon.com/resume.html .> writes:


]Bill Unruh wrote:
]>


]>Guy Macon <. http://www.guymacon.com/resume.html .> writes:
]>
]>]Bill Unruh wrote:
]>
]>]> The noise structure of "thermal noise"-- eg the noise
]>]> coming from a resistor with a constant current source across it, has
]>]> correlations, especially at long times (1/f noise).
]>
]>]There is a problem with the theory that resistors have 1/f noise.
]>]What is the amplitude of the signal at DC? At very, very close
]>]to DC? Are the real-world answers really infinity and very, very
]>]large?
]>
]>]Of course it would take forever to answer the first question with
]>]an experiment and a very, very long time to answer the second.
]>
]>1/f noise is very very poorly understood. The experimental fact is that
]>almost all systems which have been measured for a "very very " long
]>time, show 1/f noise, and there is no indication that it disappears at
]>"very very very" long times.

]I am well aware of this. The problem is that the amplitude of the
]current approaches infinity as the frequency approaches DC. This
]implies that I could take a resistor with a constant 1mA current
]source across it, wait long enough, and measure millions of amps
]going through it. Could this possibly be true?

No, with a constant current source, the current will remain constant.
However, you could measure a million volts across it (which you would if
the resistor developed an open circuit). Or with a constant voltage
source, the current would go to 1 million amps, which it would if the
resistor developed a short -- again not an impossible event. (Of course at
that point your voltage source might crap out as well.)


]A further problem I have is that intensity of the light coming from

]quasars has 1/f noise fluctuations. That's a fairly long period for
]the experiment, but then again, Quasars are very bright. Does their
]brightness really increase without limit as you wait longer and
]longer?

I guess so.

Note that it does not "increse without limit" it fulctuates, and the
fluctuation is sometimes very large.
]

]--

Bill Unruh

unread,
Jan 11, 2003, 5:08:31 PM1/11/03
to
Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> writes:

]un...@string.physics.ubc.ca (Bill Unruh) writes:
]> Yarrow is a PRNG. It does try to distill but it seems to just use a hash
]> to do the distillation, as I suggested above, but I must admit I only
]> quickly looked at the paper.

]It does as you suggest use a hash, but there are some tweaks in the
]way it works. It's nothing revolutionary, it's a pretty obvious way
]to build a PRNG, but it's the obvious design with all the details
]worked out by a team of top-flight cryptographers. Follow their
]design and you're much more likely to avoid mistakes that introduce
]insecurity.

I do not dispute that, but what I want is some sort of theoretical
demonstration that hashing a stream of, say, N bits with r bits of
redundancy to M bits (M<N-r) makes those M bits have no redundancy (max
entropy). Clearly this depends on the hash (e.g., taking the first M bits
is a hash which clearly does not get rid of the redundancy). But just as
clearly any hash is really of that form, but with a transformation on the
N bits before choosing the "first M".

Carlos Moreno

unread,
Jan 12, 2003, 11:17:59 AM1/12/03
to

> In article <3E1EE997...@xx.xxx>,
> Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>>I mean, after all, the exact position and state of
>>my body and my mind 1 minute from now will be one
>>and only one. Whatever it is, *it will be* that
>>particular *one* set of values; not two, not three
>>(yeah, we could talk about three possibilities; but
>>only one will happen -- the one that happens).
>
> The Many-Universes interpretation of Quantum Mechanics is based on the idea
> that *all* the possibilities happen. At each decision point, the universe
> splits up into different universes, each with different results.

Yes, I'm familiar with that "sci-fi notion" :-)

I mean, all the notions that this sub-thread deals with have
a certain sci-fi component. What I mean is that the many-
universes notion, though it is interesting and fun to think
about and to see movies about, is completely
unsound from the scientific point of view.

That, of course, doesn't mean that it can not be the way
the Universe works.


But anyway, coming to the randomness issue, if we accept
that the universe follows the "simultaneous parallel
realities" model (the "many-universes"), then that doesn't
change at all what I was talking about. In *the* universe
we are in, things followed *one* path, period.
That path is the same path for all times (from -infinity
to +infinity). That is, the infinite number of realities
were already an infinite number of "parallel realities"
one second after the big bang. Two different universes
may have seemed identical up until one point in time
at which they split into two. But one could argue that
no, they were already different -- maybe just one
molecule was different, and that is what at some point
had an effect noticeable at a larger scale.


Carlos
--

Guy Macon

unread,
Jan 12, 2003, 11:41:20 AM1/12/03
to


Bill Unruh wrote:


>
>Guy Macon < http://www.guymacon.com/resume.html > writes:

>
>]A further problem I have is that intensity of the light coming from
>]quasars has 1/f noise fluctuations. That's a fairly long period for
>]the experiment, but then again, quasars are very bright. Does their
>]brightness really increase without limit as you wait longer and
>]longer?
>
>I guess so.
>
>Note that it does not "increase without limit"; it fluctuates, and
>the fluctuation is sometimes very large.

Larger and larger as you wait longer and longer, infinitely large
if you wait infinitely long.

Either that, or 1/f does not hold over all frequencies. On the
high frequency end there is a point where 1/f gets so small
that only white noise is measured. Maybe there is a limit at
the low frequency end as well, but nobody has found it. If it's
below the limit set by the age of the universe, we might have to
wait a very long time before we can measure it.

--
Email: guymacon+PUT YOUR OWN NAME HE...@spamcop.net <br /><html><head>

</head><body><a href="http://www.guymacon.com/resume.html">Electrical

engineer</a> for hire: Los Angeles/Orange County CA USA, 714-670-1687
<br />See resume at: http://www.guymacon.com/resume.html</body><html>

Alun Jones

unread,
Jan 12, 2003, 2:49:46 PM1/12/03
to
In article <LzgU9.2437$sn2....@wagner.videotron.net>, Carlos Moreno
<moreno_at_mo...@xx.xxx> wrote:
>Yes, I'm familiar with that "sci-fi notion" :-)
>
>I mean, all the notions that this sub-thread deals with have
>a certain sci-fi component. What I mean is that the many-
>Universes notion, though it sounds interesting and fun
>thinking about it and seeing movies about it, is completely
>unsound from the scientific point of view.

Other than it being more convenient to your thinking of how the universe
"works", do you have any evidence that the single-universe notion is any more
"sound"?

Alun.
~~~~

[Please don't email posters, if a Usenet response is appropriate.]
--
Texas Imperial Software | Try WFTPD, the Windows FTP Server. Find us at
1602 Harvest Moon Place | http://www.wftpd.com or email al...@texis.com
Cedar Park TX 78613-1419 | VISA/MC accepted. NT-based sites, be sure to
Fax/Voice +1(512)258-9858 | read details of WFTPD Pro for XP/2000/NT.

Douglas A. Gwyn

unread,
Jan 12, 2003, 2:50:49 PM1/12/03
to
Carlos Moreno wrote:
> What I mean is that the many-
> universes notion, though it is interesting and fun to think
> about and to see movies about, is completely
> unsound from the scientific point of view.
> That, of course, doesn't mean that it can not be the way
> the Universe works.

The many-worlds theory differs from conventional
quantum theory only in its model, not in its
predictions. Its explanatory value is the only
thing it has to offer, and to some of us that is
worthless.

> ... In *the* universe we are in, things followed
> *one* path, period. That path is the same path for all
> times (from -infinity to +infinity).

That's not consistent with the many-worlds model.
There is only one backward branch, but an infinite
number of consistent forward branches. That
asymmetry is an undesirable feature of the theory.

lurker

unread,
Jan 12, 2003, 4:49:24 PM1/12/03
to

Many theories are counterintuitive when they are first proposed;
however, over time (sometimes centuries) they seem to be better
embodied. Future generations may be much more receptive to a model
that is incomprehensible today. Of course that means the door to
revisionist history must always be left open just a little.

Guy Macon

unread,
Jan 12, 2003, 8:16:12 PM1/12/03
to


Douglas A. Gwyn wrote:

>The many-worlds theory differs from conventional
>quantum theory only in its model, not in its
>predictions. Its explanatory value is the only
>thing it has to offer, and to some of us that is
>worthless.

Conventional quantum theory differs from the many-
worlds theory only in its model, not in its
predictions. Its explanatory value is the only
thing it has to offer, and to some of us that is
worthless.

--

Douglas A. Gwyn

unread,
Jan 13, 2003, 12:34:10 AM1/13/03
to
lurker wrote:
> Future generations may be much more receptive to a model
> that is incomprehensible today.

What does that have to do with the many-worlds theory?
It's already perfectly comprehensible, at least to the
extent that *any* viable quantum theory is.

Douglas A. Gwyn

unread,
Jan 13, 2003, 12:35:17 AM1/13/03
to
Guy Macon wrote:
> Douglas A. Gwyn wrote:
>>The many-worlds theory differs from conventional
>>quantum theory only in its model, not in its
>>predictions. Its explanatory value is the only
>>thing it has to offer, and to some of us that is
>>worthless.
> Conventional quantum theory differs from the many-
> worlds theory only in its model, not in its
> predictions. Its explanatory value is the only
> thing it has to offer, and to some of us that is
> worthless.

Which came first, and which was motivated *only* as
an alternative "explanation"?

Guy Macon

unread,
Jan 13, 2003, 4:42:35 AM1/13/03
to

I have Pi answers. Pick which one you choose to be
my explanation using any method that suits you.

ANSWER A:

The many-worlds theory could have come first.
It was random chance that it didn't. In the
history of Science there are many examples of
alternate explanations. Sometimes the first
turns out to be true, sometimes the second. Or
the third.

ANSWER #1:

In this universe the conventional theory came
first, but in an infinite number of other
universes the many-worlds theory came first and
what we call the conventional theory was motivated
only as an alternative "explaination."

FIRST ANSWER:

Both Answer #1 and Answer A are true. Also, both
Answer #1 and Answer A are false. Not only that,
but this First Answer is also true and false and
both and neither. It's a Zen thing. Or a QM
thing. Or both. Or neither. Or <head explodes>

Alun Jones

unread,
Jan 13, 2003, 8:34:24 AM1/13/03
to
In article <3E225015...@null.net>, "Douglas A. Gwyn" <DAG...@null.net>
wrote:

>Which came first, and which was motivated *only* as
>an alternative "explanation"?

Ooh - I know the answer to this one!

"Earth is the center of the universe" came first, and then "Earth revolves
around the sun and rotates on its axis" came later motivated only as an
alternative explanation.

Mark H. Wood

unread,
Jan 13, 2003, 9:26:44 AM1/13/03
to
In comp.security.misc Alun Jones <al...@texis.com> wrote:
> In article <3E225015...@null.net>, "Douglas A. Gwyn" <DAG...@null.net>
> wrote:
>>Which came first, and which was motivated *only* as
>>an alternative "explanation"?
>
> Ooh - I know the answer to this one!
>
> "Earth is the center of the universe" came first, and then "Earth revolves
> around the sun and rotates on its axis" came later motivated only as an
> alternative explanation.

And then along came "'center of the universe' is a meaningless noise.
If you find it useful to think of a center, pick one that's easy to
work with."

I can't wait to see the next explanation. :-)

--
Mark H. Wood, Lead System Programmer mw...@IUPUI.Edu
MS Windows *is* user-friendly, but only for certain values of "user".

lurker

unread,
Jan 13, 2003, 9:54:38 AM1/13/03
to

You wrote:

>The many-worlds theory differs from conventional
>quantum theory only in its model, not in its
>predictions. Its explanatory value is the only
>thing it has to offer, and to some of us that is
>worthless.

In the future a useful model of quantum theory may allow extension of
knowledge, technology and common experience. Before the popular model
of a round earth there could be no planning for alternative spice
trade routes etc, etc.

There are often intermediate models to explain theories that help move
the popular culture. Although these models may often have no
immediate predictive advantage, they set the stage for future
developments. You never gain something without losing something
when you accept one model over another, but that is assumed to be the
cost of evolving.


Barry Margolin

unread,
Jan 13, 2003, 10:59:50 AM1/13/03
to
In article <avuib4$qkb$2...@rainier.uits.indiana.edu>,

Mark H. Wood <mw...@mhw.ULib.IUPUI.Edu> wrote:
>And then along came "'center of the universe' is a meaningless noise.

Hey, if it's noise, can we use it to generate random numbers?

Just trying to get back to the topic of this thread.... :)

--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Alun Jones

unread,
Jan 13, 2003, 11:20:27 AM1/13/03
to
In article <3E21C719...@null.net>, "Douglas A. Gwyn" <DAG...@null.net>
wrote:

>The many-worlds theory differs from conventional
>quantum theory only in its model, not in its
>predictions. Its explanatory value is the only
>thing it has to offer, and to some of us that is
>worthless.

If all a model does is clarify a concept to people that previously thought it
to be completely opaque, that model is of some use; as long as the model can
be discarded when understanding requires it to be discarded.

That "some of us" don't need the explanation doesn't mean that others should
be left ignorant, if there is a valid model that achieves explanation.

Alun Jones

unread,
Jan 13, 2003, 11:20:28 AM1/13/03
to
In article <WnBU9.1$qe2...@paloalto-snr1.gtei.net>, Barry Margolin
<bar...@genuity.net> wrote:
>In article <avuib4$qkb$2...@rainier.uits.indiana.edu>,
>Mark H. Wood <mw...@mhw.ULib.IUPUI.Edu> wrote:
>>And then along came "'center of the universe' is a meaningless noise.
>
>Hey, if it's noise, can we use it to generate random numbers?
>
>Just trying to get back to the topic of this thread.... :)

It might work - all you'd have to do is poll people to discover where they
thought the centre of the universe was. Of course, you'd have to pick people
randomly ... d'oh!

Mark Gordon

unread,
Jan 14, 2003, 6:46:54 AM1/14/03
to
On Mon, 13 Jan 2003 16:20:28 GMT
al...@texis.com (Alun Jones) wrote:

> In article <WnBU9.1$qe2...@paloalto-snr1.gtei.net>, Barry Margolin
> <bar...@genuity.net> wrote:
> >In article <avuib4$qkb$2...@rainier.uits.indiana.edu>,
> >Mark H. Wood <mw...@mhw.ULib.IUPUI.Edu> wrote:
> >>And then along came "'center of the universe' is a meaningless
> >noise.
> >
> >Hey, if it's noise, can we use it to generate random numbers?
> >
> >Just trying to get back to the topic of this thread.... :)
>
> It might work - all you'd have to do is poll people to discover where
> they thought the centre of the universe was. Of course, you'd have to
> pick people randomly ... d'oh!

Would the people responding to a thread on usenet about where the centre
of the universe is be sufficiently random to produce a random
distribution of where the centre of the universe is believed to be?
;-)
--
Mark Gordon

Ian Abel

unread,
Jan 13, 2003, 4:12:57 PM1/13/03
to
<RANT>

Just on a physics point here...
Deterministic views of the universe died with Heisenberg and chaos theory.

Heisenberg's principle states that it is *impossible* (if quantum mechanics
is to be self-consistent) for one to measure BOTH the position and velocity
of a particle simultaneously to an arbitrary degree of precision, i.e. the
more you know about one the LESS you can know about the other.

Chaos theory shows that two instances of a system obeying deterministic
rules can diverge to an arbitrarily large degree for very small
discrepancies in the initial conditions of the system.

In English... if you have two cars whose initial speeds differ by 1 mm/s,
then the difference in their positions will not be of the order of
kilometers unless enormously large times are considered. This is because
cars are not chaotic. However, if you take turbulent flow from a tap
(faucet), then for a very small increase in the initial velocity of the
fluid (how open the tap is) you can obtain very large differences in the
behaviour of the flow. Another chaotic system is the perennial favourite,
the weather. Thus we can only know the future of such a system with
infinitely accurate initial conditions.
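
A quick numerical illustration of that sensitivity, using the logistic map
-- a standard chaotic system chosen here purely for illustration, not
something mentioned in the post:

    def logistic(x, r=4.0):
        # One step of the logistic map, chaotic at r = 4.
        return r * x * (1.0 - x)

    a, b = 0.2, 0.2 + 1e-12       # initial conditions differing by 10^-12
    for _ in range(60):
        a, b = logistic(a), logistic(b)
    print(abs(a - b))             # typically of order 0.1-1 after 60 steps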

So, as large parts of the universe are governed by chaotic systems, and
the Heisenberg principle says we cannot obtain the initial conditions to the
needed accuracy, we are left with the result that we cannot predict the future
no matter how powerful our computers get, unless we enlist divine
intervention, which may circumvent the uncertainty principle.

No more determinism.

</rant>

lurker <n...@nospam.org> wrote in message
news:3e1cd648...@netnews.worldnet.att.net...
> On Wed, 08 Jan 2003 20:52:59 -0500, Benjamin Goldberg
> <gol...@earthlink.net> wrote:
>
> >Carlos Moreno wrote:
> >[snip]
> >> After all, one could argue that given the *exact* values
> >> for *all* the physical parameters (speed, position,
> >> electrical charge, etc.) of every single particle or point
> >> of matter one second after the "big bang", then you could
> >> (theoretically speaking) determine the *exact* state of
> >> the universe *at any given time* (i.e., position and
> >> speed of *every* single particle in the universe).
> >>
> >> Sad notion, isn't it?
> >
> >Supposing for a moment that you could build a "universe simulator" to
> >make this determination -- obviously, you're trying to measure how
> >things are "now", so you'd have to run the simulator up to the point in
> >time that we exist -- some problems, and questions arise:
> >
> > 1/ Is possible to run the simulated universe faster than the passage
> >of time of the actual universe? I suspect not.
> >
> > 2/ Would the simulated humans in the simulated universe be "real
> >people," with souls as real as our own?
> >
> > 3/ Assuming that there is a God, and that miracles *have* happened
> >(at least one miracle from at least one holy book), wouldn't that mean
> >that to make the simulated universe behave the same as our own did, we
> >would have to create miracles in the simulated universe, precisely the
> >same as God's miracles in our own real universe?
> >
> > 4/ If everything is determinable, do we have free will?
> >
> > 5/ If we could somehow run the simulator faster than the real
> >universe, then could we simulate the present and the future? (Keeping
> >in mind that to simulate the near past and the present, the simulator
> >would need to be simulating itself!)
> >
> >--
> >$..='(?:(?{local$^C=$^C|'.(1<<$_).'})|)'for+a..4;
> >$..='(?{print+substr"\n !,$^C,1 if $^C<26})(?!)';
> >$.=~s'!'haktrsreltanPJ,r coeueh"';BEGIN{${"\cH"}
> >|=(1<<21)}""=~$.;qw(Just another Perl hacker,\n);
>
> Wouldn't this scenario require a Bell's theorem/many worlds structure
> to the cosmos that had every possible branching possibility happening
> some where/time?


Guy Macon

unread,
Jan 15, 2003, 2:20:41 PM1/15/03
to


Ian Abel wrote:

>No more determinism.

You were predestined to write that.


Carlos Moreno

unread,
Jan 16, 2003, 12:27:10 AM1/16/03
to
Barry Margolin wrote:
> In article <3E1CCC49...@xx.xxx>,
> Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
>
>>The discussion is mostly philosophical. Most (sane) people
>>should agree that thermal noise can be considered random
>>for all practical purposes from any conceivable point of
>>view.
>
> For crypto purposes, the issue isn't whether it's random, but whether it's
> "random enough" -- i.e. are there enough random bits to be useful in
> seeding an RNG? This depends, of course, on how precisely we're able to
> measure the temperature -- the most randomness is in the low-order bits

Err... That's not what thermal noise is. Nobody is going
to measure any temperature. You measure an electric signal
(the background noise), which is theoretically gaussian
white noise. (In practice, it's probably white enough
up to a few terahertz... i.e., you probably have more
than 10^10 or 10^20 bits of random information per second.)

The name *thermal* noise comes from the fact that it is
the electrical noise produced by the interaction of moving
electrons/particles (moving because of temperature).

Carlos
--

Michael Sierchio

unread,
Jan 16, 2003, 2:01:25 AM1/16/03
to
Carlos Moreno wrote:

> Err... That's not what thermal noise is. Nobody is going
> to measure any temperature.

Quite so.

Temperature is already an average, so the useful stuff like
local entropy is filtered out. (Average random translational
kinetic energy is the thermodynamic definition of temperature).


Temperature is usually a continuous function, primarily because
of how it's measured, but there may be useful chaotic molecular
behavior that is local, and perturbed enough by measurement to
make it secret and non-repeatable.

Barry Margolin

unread,
Jan 16, 2003, 10:10:22 AM1/16/03
to
In article <FprV9.47554$vR3.7...@weber.videotron.net>,
Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:

>Barry Margolin wrote:
>> For crypto purposes, the issue isn't whether it's random, but whether it's
>> "random enough" -- i.e. are there enough random bits to be useful in
>> seeding an RNG? This depends, of course, on how precisely we're able to
>> measure the temperature -- the most randomness is in the low-order bits
>
>Err... That's not what thermal noise is. Nobody is going
>to measure any temperature.

For the purposes of my statement, the actual thing being measured is pretty
much irrelevant. The point of my statement was that you take the low-order
bits of whatever it is, as they're likely to have the least correlation to
anything (if they do, they're not low-order enough).

Paul Crowley

unread,
Jan 16, 2003, 11:25:11 AM1/16/03
to
Barry Margolin <bar...@genuity.net> writes:
> For the purposes of my statement, the actual thing being measured is
> pretty much irrelevant. The point of my statement was that you take
> the low-order bits of whatever it is, as they're likely to have the
> least correlation to anything (if they do, they're not low-order
> enough).

Will people please stop saying this? That just isn't how randomness
distillation is done.

You take all the input, and hash it, and use the hash output to key a
pseudorandom generator. For a detailed working out of the ideas I've
sketched here, read http://www.counterpane.com/yarrow.html
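
In rough outline, the idea looks like the following Python sketch. It shows
only the general shape -- Yarrow's actual construction adds entropy pools,
reseed control and a block cipher, so this is not its real design:

    import hashlib, os

    raw_samples = os.urandom(1024)              # stand-in for collected noise
    key = hashlib.sha256(raw_samples).digest()  # hash all the input ...

    def prg(key: bytes, n_blocks: int):
        # ... then use the digest to key a simple hash-counter generator.
        for counter in range(n_blocks):
            yield hashlib.sha256(key + counter.to_bytes(8, "big")).digest()

    stream = b"".join(prg(key, 4))              # 128 pseudorandom bytes
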
--
__ Paul Crowley
\/ o\ s...@paul.ciphergoth.org
/\__/ http://www.ciphergoth.org/

Barry Margolin

unread,
Jan 16, 2003, 1:36:18 PM1/16/03
to
In article <87k7h51...@saltationism.subnet.hedonism.cluefactory.org.uk>,

Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> wrote:
>Barry Margolin <bar...@genuity.net> writes:
>> For the purposes of my statement, the actual thing being measured is
>> pretty much irrelevant. The point of my statement was that you take
>> the low-order bits of whatever it is, as they're likely to have the
>> least correlation to anything (if they do, they're not low-order
>> enough).
>
>Will people please stop saying this? That just isn't how randomness
>distillation is done.

I don't know anything about how it *is* done. I'm speculating about other
ways it *could be* done.

>You take all the input, and hash it, and use the hash output to key a
>pseudorandom generator. For a detailed working out of the ideas I've
>sketched here, read http://www.counterpane.com/yarrow.html

But that still begs the question of what data to use as the "all the
input". If you're including data that has strong correlations, it's not as
good as if it didn't. And in many processes, high-order components of the
data have very strong correlations and are highly predictable.

For instance, suppose the raw data were the daily high temperatures at some
location, measured to 0.1 degree precision. The 10's digit exhibits very
strong correlation, since temperature in many places rarely fluctuates by
more than 10-20 degrees from day to day. The 1's digit is a little better,
although weather forecasters can usually predict it +/- a few degrees. But
if you just use the fractions, I don't think there's much correlation or
predictability at all -- temperature fluctuates more than that from minute
to minute.

So the input I would suggest feeding to the hash function would be just the
fractional components of the temperatures.
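
A minimal Python sketch of that filtering (the temperature readings are
invented for illustration):

    import hashlib

    daily_highs = [21.3, 22.7, 20.1, 23.4, 22.9]     # hypothetical readings
    # Keep only the tenths digit of each 0.1-degree reading before hashing;
    # the trailing % 10 guards against a value rounding up to 10.
    fractions = bytes(int(round((t % 1.0) * 10)) % 10 for t in daily_highs)
    seed = hashlib.sha256(fractions).digest()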

Nicol So

unread,
Jan 16, 2003, 2:19:35 PM1/16/03
to
Barry Margolin wrote:
>
> In article <87k7h51...@saltationism.subnet.hedonism.cluefactory.org.uk>,
> Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> wrote:
> >
> >You take all the input, and hash it, and use the hash output to key a
> >pseudorandom generator. For a detailed working out of the ideas I've
> >sketched here, read http://www.counterpane.com/yarrow.html
>
> But that still begs the question of what data to use as the "all the
> input". If you're including data that has strong correlations, it's not as
> good as if it didn't. And in many processes, high-order components of the
> data have very strong correlations and are highly predictable.
>
> ...

>
> So the input I would suggest feeding to the hash function would be just the
> fractional components of the temperatures.

The kind of precaution you suggested is not necessary. What you need to
make sure is that the input to the hash function contains a sufficient
*total* amount of entropy. "Sufficient" here means the number of bits in
the hash output, plus a margin. The margin needed depends on both the
distribution of samples and the hash function. You may or may not be
able to estimate accurately how many samples you'll need to have in the
hash input to have the requisite amount of entropy. When in doubt, use a
large margin.

(Highly correlated samples just mean that the amount of true entropy in
each sample is small(er). You can overcome that by combining more
samples to form the input to the hash function. The reason I said your
precaution is not necessary is that there are so many ways the samples
can be correlated. Instead of dealing with each possible mode of
correlation individually, it is better to just to let the hash function
take care of it.)
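
As a back-of-the-envelope version of that accounting (the per-sample credit
and the margin below are arbitrary assumptions, not recommendations):

    bits_per_sample = 0.25        # conservative entropy credited per sample
    output_bits = 128             # size of the hash output we want to fill
    margin_bits = 64              # extra safety margin
    samples_needed = (output_bits + margin_bits) / bits_per_sample
    print(samples_needed)         # 768.0 samples of the correlated source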

--
Nicol So
Disclaimer: Views expressed here are casual comments and should
not be relied upon as the basis for decisions of consequence.

Barry Margolin

unread,
Jan 16, 2003, 2:55:00 PM1/16/03
to
In article <3E2705C7...@no.spam.please>, Nicol So <see.signature> wrote:
>(Highly correlated samples just mean that the amount of true entropy in
>each sample is small(er). You can overcome that by combining more
>samples to form the input to the hash function. The reason I said your
>precaution is not necessary is that there are so many ways the samples
>can be correlated. Instead of dealing with each possible mode of
>correlation individually, it is better to just to let the hash function
>take care of it.)

Well, in the real world there are often limits on the amount of storage.
If the data is highly correlated, you may need to have an enormous number
of samples in your database so that you'll have enough total entropy. But
if you extract the less-correlated components of the data at sampling time,
you can get much more entropy in the same amount of storage.

A general purpose solution won't necessarily know which parts of the data
are more correlated, so it depends on you just feeding enough data to it.
But in many cases you know a priori where the correlations are, and it's
relatively easy to filter them out.

Nicol So

unread,
Jan 16, 2003, 3:49:45 PM1/16/03
to
Barry Margolin wrote:
>
> In article <3E2705C7...@no.spam.please>, Nicol So <see.signature> wrote:
> >(Highly correlated samples just mean that the amount of true entropy in
> >each sample is small(er). You can overcome that by combining more
> >samples to form the input to the hash function. The reason I said your
> >precaution is not necessary is that there are so many ways the samples
> >can be correlated. Instead of dealing with each possible mode of
> >correlation individually, it is better to just to let the hash function
> >take care of it.)
>
> Well, in the real world there are often limits on the amount of storage.
> If the data is highly correlated, you may need to have an enormous number
> of samples in your database so that you'll have enough total entropy. But
> if you extract the less-correlated components of the data at sampling time,
> you can get much more entropy in the same amount of storage.

In my previous message, I made the implicit assumption that data samples
are collected, combined, and then fed to the hash function. That was
just for the purpose of discussion. Real randomness harvesting schemes
need not be implemented that way. What you can do is to have an entropy
buffer which accumulates (by means of mixing) the entropy in data
samples, as each sample becomes available. You need to maintain an
estimate of the amount of entropy in the buffer. You harvest the content
of the entropy buffer when your estimate reaches a threshold. Then you
start anew.

(As a side remark, practical cryptographic hash functions don't require
the entire input to be buffered. It works pretty much the way described
above. The "entropy buffer" in the above corresponds to the chaining
variables in crypto hash functions.)
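
A streaming Python sketch of that arrangement (get_sample, the per-sample
entropy credit and the threshold are all assumptions made up for the
example):

    import hashlib, os

    def get_sample() -> bytes:
        return os.urandom(1)                  # placeholder for a noise sample

    state = hashlib.sha256()                  # plays the role of the buffer
    credited, threshold = 0.0, 256 + 64       # entropy estimate and target
    while credited < threshold:
        state.update(get_sample())            # mix each sample as it arrives
        credited += 0.5                       # conservative per-sample credit
    harvested = state.digest()                # take the output, then start anew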

> A general purpose solution won't necessarily know which parts of the data
> are more correlated, so it depends on you just feeding enough data to it.
> But in many cases you know a priori where the correlations are, and it's
> relatively easy to filter them out.

In any case, to estimate the entropy in the buffer, you need to know
(lower bounds for) the entropy rates of the sources from which you
collect randomness.

Mok-Kong Shen

unread,
Jan 16, 2003, 4:05:56 PM1/16/03
to

Quite a while ago there was a thread about obtaining
random numbers with the timers of computers. Could
someone tell whether there are any interesting new
results in that approach? Thanks.

M. K. Shen

lurker

unread,
Jan 16, 2003, 4:42:57 PM1/16/03
to

The technique goes way back before Usenet, but there is an article
written by Terry Ritter that covers the basics. Try searching for
the constants 4.27 11 0.064% 6295 in sci.crypt and you will find a
nice discussion. Details of the assembler code necessary to actually
reach this quality of random results vary by hardware platform.

Bill Unruh

unread,
Jan 16, 2003, 7:18:57 PM1/16/03
to
Barry Margolin <bar...@genuity.net> writes:

]In article <FprV9.47554$vR3.7...@weber.videotron.net>,


]Carlos Moreno <moreno_at_mo...@xx.xxx> wrote:
]>Barry Margolin wrote:
]>> For crypto purposes, the issue isn't whether it's random, but whether it's
]>> "random enough" -- i.e. are there enough random bits to be useful in
]>> seeding an RNG? This depends, of course, on how precisely we're able to
]>> measure the temperature -- the most randomness is in the low-order bits
]>
]>Err... That's not what thermal noise is. Nobody is going
]>to measure any temperature.

]For the purposes of my statement, the actual thing being measured is pretty
]much irrelevant. The point of my statement was that you take the low-order
]bits of whatever it is, as they're likely to have the least correlation to
]anything (if they do, they're not low-order enough).

No. Depending on the physical model and measuring apparatus, low order
bits can also be correlated and biased. I do not think in this game
there is ever a magic bullet which always works. Understand your
random source, and use that understanding to develop a good generator.

Bill Unruh

unread,
Jan 16, 2003, 7:23:29 PM1/16/03
to
Paul Crowley <pa...@JUNKCATCHER.ciphergoth.org> writes:

]Barry Margolin <bar...@genuity.net> writes:
]> For the purposes of my statement, the actual thing being measured is
]> pretty much irrelevant. The point of my statement was that you take
]> the low-order bits of whatever it is, as they're likely to have the
]> least correlation to anything (if they do, they're not low-order
]> enough).

]Will people please stop saying this? That just isn't how randomness
]distillation is done.

]You take all the input, and hash it, and use the hash output to key a
]pseudorandom generator. For a detailed working out of the ideas I've
]sketched here, read http://www.counterpane.com/yarrow.html

I guess I would say the same for your technique. This is another "magic
bullet" which sometimes works and sometimes fails.
A hash is a specific reduction of the data, just as taking the lowest
order bits is (that is a hash too). Now, you may believe that this
particular hash that you use is uncorrelated with the correlations in
the data you have. In general that is probably true. But it could be
that your hash is as bad as taking the highest order bit of the data as
your hash (i.e. highly correlated). Know your data, use that knowledge to
distill randomness, and then run the resultant stream -- which you have
tried your damnedest to make a pure random stream -- through a hash.
(Why you would then use still another hash -- namely the stream cypher --
I do not know, but if it makes you feel happy, go ahead.)
