
Feed randomness in /dev/random or /dev/urandom?


Karl.Frank
Aug 25, 2017, 8:41:16 AM
Assuming there's a TRNG available on a Linux machine.

The question is where to constantly inject, let's say, 2048 bits of
randomness drawn from the TRNG:

- /dev/random

- /dev/urandom




--
cHNiMUBACG0HAAAAAAAAAAAAAABIZVbDdKVM0w1kM9vxQHw+bkLxsY/Z0czY0uv8/Ks6WULxJVua
zjvpoYvtEwDVhP7RGTCBVlzZ+VBWPHg5rqmKWvtzsuVmMSDxAIS6Db6YhtzT+RStzoG9ForBcG8k
G97Q3Jml/aBun8Kyf+XOBHpl5gNW4YqhiM0=

Rich
Aug 25, 2017, 11:39:56 AM
Karl.Frank <Karl....@freecx.co.uk> wrote:
> Assuming there's a TRNG available on a Linux machine.
>
> The question is where to constantly inject, lets say 2048bit of
> randomness drawn of the TRNG
>
> - /dev/random
>
> - /dev/urandom

Both create random numbers from the same CSPRNG and ultimately from the
same master entropy pool.

According to the urandom man page, writing to either mixes the new
stuff into the entropy pool:

Writing to /dev/random or /dev/urandom will update the entropy
pool with the data written, but this will not result in a higher
entropy count. This means that it will impact the contents read
from both files, but it will not make reads from /dev/random
faster.

So the outcome is identical for both, and the answer is: whichever one
you prefer.

Karl.Frank
Aug 25, 2017, 12:01:47 PM
Thanks for clarifying.

Interesting though that using the following tool

https://github.com/rfinnie/twuewand/blob/master/rndaddentropy/rndaddentropy.c

and running

watch -n 1 cat /proc/sys/kernel/random/entropy_avail

in a separate terminal while feeding some bits like

dd if=/dev/trng bs=1 count=2048 | rndaddentropy

the increase is visible. This is perhaps a way to check whether or not
the bits really get injected properly.

But well, effectively

dd if=/dev/trng bs=1 count=2048 >> /dev/random

might produce the same result without updating the entropy counter.




Rich
Aug 25, 2017, 12:34:11 PM
That is because the C code calls the RNDADDENTROPY ioctl on the device
(line 58):

if(ioctl(randfd, RNDADDENTROPY, &entropy) < 0) {

> Somehow this is perhaps a way to check whether or not the bits are
> really properly injected.
>
> But well, effectively
>
> dd if=/dev/trng bs=1 count=2048 >> /dev/random
>
> might produce the same result without updating the entropy counter.

It won't call the ioctl, so the counter won't update because of that
insertion method.

Karl.Frank
Aug 25, 2017, 1:45:38 PM
I consider the counter update quite helpful, especially because it's
striking how low this value normally is on a relatively active
internet server. Most of the time it does not even reach 200;
sometimes it even falls below 50.


>> Somehow this is perhaps a way to check whether or not the bits are
>> really properly injected.
>>
>> But well, effectively
>>
>> dd if=/dev/trng bs=1 count=2048>> /dev/random
>>
>> might produce the same result without updating the entropy counter.
>
> It won't call the ioctl, so the counter won't update because of that
> insertion method.



Karl.Frank
Aug 25, 2017, 8:17:32 PM
On 25.08.17 14:41, Karl.Frank wrote:
> Assuming there's a TRNG available on a Linux machine.
>
> The question is where to constantly inject, lets say 2048bit of
> randomness drawn of the TRNG
>
> - /dev/random
>
> - /dev/urandom
>

Coming across this discussion

https://unix.stackexchange.com/questions/324209/when-to-use-dev-random-vs-dev-urandom

the most surprising part was this answer:

According to Theodore Ts'o on the Linux Kernel Crypto mailing list,
/dev/random has been deprecated for a decade. From Re: [RFC PATCH v12
3/4] Linux Random Number Generator:

Practically no one uses /dev/random. It's essentially a deprecated
interface; the primary interfaces that have been recommended for well
over a decade is /dev/urandom, and now, getrandom(2).



So, on system boot it seems appropriate to seed /dev/random first and
shortly thereafter /dev/urandom with some previously saved seeds, or
even to restore the complete /dev/urandom pool from a previously saved
state (as the random man page recommends).

Assuming that /dev/urandom always maintains a full 4096-bit pool led me
to these further questions:


* is this a proper way to feed external entropy into the /dev/urandom pool?

dd if=/dev/trng bs=1 count=2048 >> /dev/urandom



* is there any way to verify that entropy fed with the above-mentioned
command gets properly injected?





MM
Aug 26, 2017, 4:55:11 AM
On Saturday, 26 August 2017 01:17:32 UTC+1, Karl.Frank wrote:
> Coming across this discussion
>
> https://unix.stackexchange.com/questions/324209/when-to-use-dev-random-vs-dev-urandom
>
> most surprises was this answer
>
> According to Theodore Ts'o on the Linux Kernel Crypto mailing list,
> /dev/random has been deprecated for a decade. From Re: [RFC PATCH v12
> 3/4] Linux Random Number Generator:
>
> Practically no one uses /dev/random. It's essentially a deprecated
> interface; the primary interfaces that have been recommended for well
> over a decade is /dev/urandom, and now, getrandom(2).

Ferguson, Schneier and Kohno explain why very well in section 9.1.3 of
"Cryptography Engineering" ISBN 978-0-470-47424-2.

> So, on system boot it seems to be appropriate seeding /dev/random first
> and shortly thereafter /dev/urandom with some previously saved seeds -
> or even restore the complete /dev/uramdom pool from a previously saved
> state (as the random man page recommends).
>
> Assuming that /dev/urandom always maintain a full 4096 bit pool lead me
> to these further questions:

That is an assumption that is not viable.

> * is this a proper way to feed external entropy into the /dev/urandom pool?
>
> dd if=/dev/trng bs=1 count=2048 >> /dev/urandom

On FreeBSD, sure.

> * is there any way to verify that feeded entropy with the above
> mentioned command get properly injected?

See the source. If you could determine the security state of the interface
externally, you could attack it when it's at its weakest. If you can't tell the
difference between weak state and strong state, you can't use that to
attack it.

M
--

Karl.Frank
Aug 27, 2017, 3:07:50 PM
On 26.08.17 10:55, MM wrote:
> On Saturday, 26 August 2017 01:17:32 UTC+1, Karl.Frank wrote:
>> Coming across this discussion
>>
>> https://unix.stackexchange.com/questions/324209/when-to-use-dev-random-vs-dev-urandom
>>
>> most surprises was this answer
>>
>> According to Theodore Ts'o on the Linux Kernel Crypto mailing list,
>> /dev/random has been deprecated for a decade. From Re: [RFC PATCH v12
>> 3/4] Linux Random Number Generator:
>>
>> Practically no one uses /dev/random. It's essentially a deprecated
>> interface; the primary interfaces that have been recommended for well
>> over a decade is /dev/urandom, and now, getrandom(2).
>
> Ferguson, Schneier and Kohno explain why very well in section 9.1.3 of
> "Cryptography Engineering" ISBN 978-0-470-47424-2.
>
>> So, on system boot it seems to be appropriate seeding /dev/random first
>> and shortly thereafter /dev/urandom with some previously saved seeds -
>> or even restore the complete /dev/uramdom pool from a previously saved
>> state (as the random man page recommends).
>>
>> Assuming that /dev/urandom always maintain a full 4096 bit pool lead me
>> to these further questions:
>
> That is an assumption that is not viable.
>
>> * is this a proper way to feed external entropy into the /dev/urandom pool?
>>
>> dd if=/dev/trng bs=1 count=2048>> /dev/urandom
>
> On FreeBSD, sure.
>
This is a comment I found on Debian in /etc/init.d/urandom:

#-------------------
...

Redirect output of subshell (not individual commands)
to cope with a misfeature in the FreeBSD (not Linux)
/dev/random, where every superuser write/close causes
an explicit reseed of the yarrow.
) >/dev/urandom

#-------------------

Given this, the above-mentioned feeding command should read

(dd if=/dev/trng bs=1 count=2048) >> /dev/urandom

if I interpret it correctly, because otherwise the pool might become
"easier" to predict.


>> * is there any way to verify that feeded entropy with the above
>> mentioned command get properly injected?
>
> See the source. If you could determine the security state of the interface
> externally, you could attack it when its at its weakest. If you can't tell the
> difference between weak state and strong state, you can't use that to
> attack it.
>
> M

Of course this seems to be the logical approach. But on the other hand
you might need to check the source for every Linux and *BSD
distribution, and perhaps again after every update. In my opinion a
very hefty task.

But maybe the following answer to a similar question in the slackware
newsgroup, and my reply on this matter, might shed some light on the issue.

>
> Finally, for people who don't want to read "man 4 random", here is a
> snippet which (partially?) answers Karl's question:
> Writing to /dev/random or /dev/urandom will update the entropy
> pool with the data written, but this will not result in a higher
> entropy count. This means that it will impact the contents read
> from both files, but it will not make reads from /dev/random
> faster.
>
>
> Jim
>
That's what I'm really after: seeding proper random data into the
entropy pool, either via /dev/random or (if writing is permitted, e.g. on
some virtual servers such as OpenVZ) via /dev/urandom.

Currently I'm observing, on a virtual server which is under heavy ssh
attack, that the available entropy of /dev/random frequently, at least
every 5 minutes, decreases to just 1 bit! And it stays at this
extremely low level for about 30 to 60 seconds, then very slowly
increases to 4, 10, 26, 45, and after about 3 minutes jumps up to
roughly 120 to 150.

However, constantly writing some 2048 bits of FIPS 140-2 compliant data
to /dev/urandom keeps the available entropy of /dev/random at a regular
level of about 180.

So writing quality randomness to /dev/urandom seems to be a
reasonable mitigation against draining /dev/random.




MM
Aug 27, 2017, 5:07:26 PM
On Sunday, 27 August 2017 20:07:50 UTC+1, Karl.Frank wrote:
> This is some comment I found on Debian at /etc/init.d/urandom
>
> #-------------------
> ...
>
> Redirect output of subshell (not individual commands)
> to cope with a misfeature in the FreeBSD (not Linux)
> /dev/random, where every superuser write/close causes
> an explicit reseed of the yarrow.
> ) >/dev/urandom

Why do you (also) believe this is a misfeature?

> Regarding this the above mentioned feeding command should read
>
> (dd if=/dev/trng bs=1 count=2048) >> /dev/urandom
>
> if I interpret it correctly because otherwise the pool might become
> "easier" to predict.

How?

> Of course this seems to be the logical approach. But on the other hand
> you would might need to check the source for every Linux and *BSD
> distribution. And furthermore after every update perhaps. In my opinion
> a very hefty task.

Trust the programmers, the auditors, or audit the code yourself. You don't
have a lot of choice there.

> Currently I'm observing on a virtual server which is under heavy ssh
> attack that the available entropy of /dev/random frequently, at least
> every 5 minutes, is decreasing to just 1 bit! And it stays at this
> extremely low level for about 30 to 60 seconds. Then very slowly
> increasing to 4, 10, 26, 45 and after about 3 minutes a jump up to round
> about 120 to 150.

This is why FreeBSD doesn't do things that way.

> However constantly writing some 2048 bit FIPS-140-2 compliant data to
> /dev/urandom keeps the available entropy of /dev/random on a regular
> level of about 180.
>
> So writing quality randomness to /dev/urandom seems to be a
> reasonable mitigation against draining /dev/random.

Or go for a model that doesn't have a /dev/random that can be "drained".
See Ferguson /et al/ for the rationale.

M
--

Rob Warnock
Aug 28, 2017, 2:12:42 AM
Karl.Frank <Karl....@Freecx.co.uk> wrote:
+---------------
| >> * is this a proper way to feed external entropy into the /dev/urandom pool?
| >>
| >> dd if=/dev/trng bs=1 count=2048>> /dev/urandom
| >
| > On FreeBSD, sure.
| >
| This is some comment I found on Debian at /etc/init.d/urandom
...
| misfeature in the FreeBSD (not Linux) /dev/random, where every
| superuser write/close causes an explicit reseed of the yarrow.
| ) >/dev/urandom
|
| Regarding this the above mentioned feeding command should read
|
| (dd if=/dev/trng bs=1 count=2048) >> /dev/urandom
|
| if I interpret it correctly because otherwise the pool might become
| "easier" to predict.
+---------------

They're trying too hard. Just do it this way:

dd if=/dev/trng ibs=1 count=2048 obs-2048 >>/dev/urandom

That will read 2048 one-byte blocks from the input
and then write one 2048-byte block to the output.


-Rob

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <http://rpw3.org/>
San Mateo, CA 94403

William Unruh
Aug 28, 2017, 4:16:24 AM
Any program that uses /dev/random on Linux is broken. If ssh uses it, it
is broken.


>
> This is why FreeBSD doesn't do things that way.
>
>> However constantly writing some 2048 bit FIPS-140-2 compliant data to
>> /dev/urandom keeps the available entropy of /dev/random on a regular
>> level of about 180.
>>
>> So writing quality randomness to /dev/urandom seems to be a
>> reasonable mitigation against draining /dev/random.

And if you wear an aluminium foil hat you will be protected against NSA
mindreading.


>
> Or go for a model that doesn't have a /dev/random that can be "drained".
> See Ferguson /et al/ for the rationale.

Or don't use /dev/random.

Note that it is never "drained", even on Linux. The half-assed estimate
of randomness may become low, but that has about as much to do with
randomness as your horoscope in today's paper does.

>
> M

Rob Warnock
Aug 28, 2017, 4:25:10 AM
Last night I wrote:
+---------------
| They're trying too hard. Just do it this way:
| dd if=/dev/trng ibs=1 count=2048 obs-2048 >>/dev/urandom
+---------------

Oops! Small typo [s/-/=/]:

dd if=/dev/trng ibs=1 count=2048 obs=2048 >>/dev/urandom

Karl.Frank
Aug 28, 2017, 6:07:27 AM
On 27.08.17 23:07, MM wrote:
> On Sunday, 27 August 2017 20:07:50 UTC+1, Karl.Frank wrote:
>> This is some comment I found on Debian at /etc/init.d/urandom
>>
>> #-------------------
>> ...
>>
>> Redirect output of subshell (not individual commands)
>> to cope with a misfeature in the FreeBSD (not Linux)
>> /dev/random, where every superuser write/close causes
>> an explicit reseed of the yarrow.
>> )>/dev/urandom
>
> Why do you (also) believe this is a misfeature?
>
To my understanding, on Linux the data is simply injected into the
/dev/random entropy pool without any explicit re-seed, which is exactly
what I'm after.


>> Regarding this the above mentioned feeding command should read
>>
>> (dd if=/dev/trng bs=1 count=2048)>> /dev/urandom
>>
>> if I interpret it correctly because otherwise the pool might become
>> "easier" to predict.
>
> How?
>
I do not know "how" exactly, but according to the man page:

A read from the /dev/urandom device will not block waiting for more
entropy. As a result, if there is not sufficient entropy in the entropy
pool, the returned values are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver. Knowledge of
how to do this is not available in the current non-classified
literature, but it is theoretically possible that such an attack may
exist. If this is a concern in your application, use /dev/random
instead.

So a re-seed with known values, or draining /dev/random, might give an
attacker some advantage. But I'm not out to find any kind of attack,
just some way of increasing the available data in the pool.


>> Of course this seems to be the logical approach. But on the other hand
>> you would might need to check the source for every Linux and *BSD
>> distribution. And furthermore after every update perhaps. In my opinion
>> a very hefty task.
>
> Trust the programmers, the auditors, or audit the code yourself. You don't
> have a lot of choice there.
>
I do trust them. I was simply trying to figure out what implications
injecting randomness has with regard to increasing the available bits
in the entropy pools.


>> Currently I'm observing on a virtual server which is under heavy ssh
>> attack that the available entropy of /dev/random frequently, at least
>> every 5 minutes, is decreasing to just 1 bit! And it stays at this
>> extremely low level for about 30 to 60 seconds. Then very slowly
>> increasing to 4, 10, 26, 45 and after about 3 minutes a jump up to round
>> about 120 to 150.
>
> This is why FreeBSD doesn't do things that way.
>
>> However constantly writing some 2048 bit FIPS-140-2 compliant data to
>> /dev/urandom keeps the available entropy of /dev/random on a regular
>> level of about 180.
>>
>> So writing quality randomness to /dev/urandom seems to be a
>> reasonable mitigation against draining /dev/random.
>
> Or go for a model that doesn't have a /dev/random that can be "drained".
> See Ferguson /et al/ for the rationale.
>
Perhaps the mentioned example on the virtual server shows that the
quality randomness gets properly injected.

I'm looking for a simple but universal solution for a big bunch of
virtual servers running a wide spread of different Linux distributions
and versions. I tried rngd and haveged, but for each and every OS
there's a different version with different behaviour, making it a big
effort to maintain their functionality, not to mention changes after
updates. Currently, simply injecting data directly seems to be a proper,
universal and long-lasting solution.


> M



Karl.Frank
Aug 28, 2017, 6:27:24 AM
According to the man page:

When read, the /dev/random device will only return random bytes within
the estimated number of bits of noise in the entropy pool. /dev/random
should be suitable for uses that need very high quality randomness such
as one-time pad or key generation. When the entropy pool is empty, reads
from /dev/random will block until additional environmental noise is
gathered.

A read from the /dev/urandom device will not block waiting for more
entropy. As a result, if there is not sufficient entropy in the entropy
pool, the returned values are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver. Knowledge of
how to do this is not available in the current unclassified literature,
but it is theoretically possible that such an attack may exist. If this
is a concern in your application, use /dev/random instead.


/dev/random is what's proposed for cryptographic purposes on Linux.

Additionally I found this comment:

The entropy pool and blocking read of /dev/random is used as a
safe-guard to ensure the impossibility of predicting the random number;
if, for example, an attacker exhausted the entropy pool of a system, it
is possible, though highly unlikely with today's technology, that he can
predict the output of /dev/urandom which hasn't been reseeded for a long
time (though doing that would also require the attacker to exhaust the
system's ability to collect more entropies, which is also astronomically
improbably).

https://stackoverflow.com/questions/3690273/did-i-understand-dev-urandom

But I simply want to avoid situations like the one described with the
mentioned ssh attack, where /dev/random gets drained down to 1 bit of
entropy for about 60 seconds. Additionally, it seems that all virtual
servers share the /dev/random device of the host, which has some grave
implications for the randomness available inside the virtual machine.


>
>>
>> Or go for a model that doesn't have a /dev/random that can be "drained".
>> See Ferguson /et al/ for the rationale.
>
> Or dont use /dev/random.
>
In regard to the above you might be right; just injecting into
/dev/urandom might suffice.


> Note that it is never "drained" even on Linux. The half assed estimate
> of randomess may become low, but that has about as much to do with
> randomness as your horiscope in today's paper does.
>
>>
>> M



Rich
Aug 28, 2017, 6:35:20 AM
Karl.Frank <Karl....@freecx.co.uk> wrote:
> I do trust them. I was simply trying to figure out what implications
> injection of randomness has in regards of increasing the available bits
> in the entropy pools.

Injection of randomness results in a more unpredictable random number
generator.

The entropy pool "fullness" measurement is more or less just a
heuristic that does not have a whole lot of real-world value. A low
value does not, in and of itself, mean someone (not of the NSA level)
would be able to predict the random values you obtain.

> I'm looking for a simple but universal solution for a big bunch of
> virtual servers running a widespread of different Linux distributions
> and version.

For virtual servers, there are very few sources of randomness to use
(no physical disk, no physical keyboard, etc.). So injecting some
external randomness directly will make their outputs more random.
Beyond "more random" there is not much else that can be said. But
note, what you inject should itself be random. Simply injecting the
identical bitstream into all of them will not help the overall random
appearance of each one a whole lot. So they each need a unique
randomness stream. I.e., don't do the equivalent of:

cat HW-RNG | tee server1 | tee server2 | tee server3

Karl.Frank
Aug 28, 2017, 6:44:38 AM
On 28.08.17 10:20, Rob Warnock wrote:
> Last night I wrote:
> +---------------
> | They're trying too hard. Just do it this way:
> | dd if=/dev/trng ibs=1 count=2048 obs-2048>>/dev/urandom
> +---------------
>
> Oops! Small typo [s/-/=/]:
>
> dd if=/dev/trng ibs=1 count=2048 obs=2048>>/dev/urandom
>

Perhaps this might even be faster?

dd if=/dev/trng ibs=2048 >>/dev/urandom


>
> -Rob
>
> -----
> Rob Warnock <rp...@rpw3.org>
> 627 26th Avenue <http://rpw3.org/>
> San Mateo, CA 94403
>



Karl.Frank
Aug 28, 2017, 6:50:31 AM
On 28.08.17 12:44, Karl.Frank wrote:
> On 28.08.17 10:20, Rob Warnock wrote:
>> Last night I wrote:
>> +---------------
>> | They're trying too hard. Just do it this way:
>> | dd if=/dev/trng ibs=1 count=2048 obs-2048>>/dev/urandom
>> +---------------
>>
>> Oops! Small typo [s/-/=/]:
>>
>> dd if=/dev/trng ibs=1 count=2048 obs=2048>>/dev/urandom
>>
>
> Perhaps even might be faster?
>
> dd if=/dev/trng ibs=2048 >>/dev/urandom
>
Yikes! This would fill up the pool completely.

Should read of course

dd if=/dev/trng ibs=256 >>/dev/urandom

Karl.Frank
Aug 28, 2017, 6:54:08 AM
On 28.08.17 12:30, Rich wrote:
> Karl.Frank<Karl....@freecx.co.uk> wrote:
>> I do trust them. I was simply trying to figure out what implications
>> injection of randomness has in regards of increasing the available bits
>> in the entropy pools.
>
> Injection of randomness results in a more unpredictable random number
> generator.
>
That's what I hoped for.


> The entropy pool "fullness" measurment is more or less just a
> hieuristic that does have a whole lot of real-world value. A low value
> does not, in and of itself, mean someone (not of the NSA level) would
> be able to predict the random values you obtain.
>
>> I'm looking for a simple but universal solution for a big bunch of
>> virtual servers running a widespread of different Linux distributions
>> and version.
>
> For virtual servers, they have very few sources of randomness to use
> (no physical disk, no physical keyboard, etc.). So injecting some
> external randomness in directly will make their outputs more random.
> Beyond "more random" there is not much else that can be said. But
> note, what you inject should itself be random. Simply injecting the
> identical bitstream into all of them will not help the overall random
> appearence of each a whole lot. So they each need a unique randomness
> stream. I.e., don't do the equivalent of:
>
> cat HW-RNG | tee server1 | tee server2 | tee server3
>
Right; currently the data is captured and sent to server1, then after
waiting 10 seconds it is captured again and sent to server2 ... and so
on in a loop.




Rich
Aug 28, 2017, 7:25:17 AM
There is only one 'pool' on Linux. Both /dev/random and /dev/urandom
pull from the same pool. Injecting into either produces the same
result, mixing more randomness into the common pool.

You need to read this: https://www.2uo.de/myths-about-urandom/

And study the "Before Linux 4.8" and "From Linux 4.8 onward" diagrams
therein.

And esp. this part:

What about entropy running low?

It doesn't matter.

The underlying cryptographic building blocks are designed such that
an attacker cannot predict the outcome, as long as there was enough
randomness (a.k.a. entropy) in the beginning. A usual lower limit
for "enough" may be 256 bits. No more.

Considering that we were pretty hand-wavey about the term "entropy"
in the first place, it feels right. As we saw, the kernel's random
number generator cannot even precisely know the amount of entropy
entering the system. Only an estimate. And whether the model
that's the basis for the estimate is good enough is pretty unclear,
too.

Provided you can inject 256 bits worth of true randomness into each VM
when it starts, you'll be good, even if the "entropy" estimate by the
kernel drops to zero.

The bits you are seeing in the man pages about entropy dropping low
allowing possible breaks are old, outdated statements that have not yet
been cleaned up from the man pages.

Karl.Frank
Aug 28, 2017, 8:35:25 AM
On 28.08.17 13:20, Rich wrote:
> Karl.Frank<Karl....@freecx.co.uk> wrote:
>> On 28.08.17 10:11, William Unruh wrote:
>>> On 2017-08-27, MM<mrvm...@gmail.com> wrote:
>>>> Or go for a model that doesn't have a /dev/random that can be
>>>> "drained". See Ferguson /et al/ for the rationale.
>>>
>>> Or dont use /dev/random.
>>>
>> In regards ot the above you might right, just injecting into
>> /dev/urandom might suffice.
>
> There is only one 'pool' on Linux. Both /dev/random and /dev/urandom
> pull from the same pool. Injecting into either produces the same
> result, mixing more randomness into the common pool.
>
Thanks, that's what I assumed, but I hadn't found any document
explaining that behaviour so far. In the case of a single pool for both
devices it should really be sufficient to inject into /dev/urandom.
Probably all programs in need of cryptographically secure values already
draw them from /dev/urandom in order to avoid the blocking behaviour.


> You need to read this: https://www.2uo.de/myths-about-urandom/
>
Yeah, found it already following the links you posted in the "Cracking
random number generators (xoroshiro128+)" thread, but perhaps didn't
read it thoroughly ;-)


> And study the "Before Linux 4.8" and "From Linux 4.8 onward" diagrams
> therein.
>
> And esp. this part:
>
> What about entropy running low?
>
> It doesn't matter.
>
> The underlying cryptographic building blocks are designed such that
> an attacker cannot predict the outcome, as long as there was enough
> randomness (a.k.a. entropy) in the beginning. A usual lower limit
> for "enough" may be 256 bits. No more.
>
> Considering that we were pretty hand-wavey about the term "entropy"
> in the first place, it feels right. As we saw, the kernel's random
> number generator cannot even precisely know the amount of entropy
> entering the system. Only an estimate. And whether the model
> that's the basis for the estimate is good enough is pretty unclear,
> too.
>
> Provided you can inject 256 bits worth of true randomness into each VM
> when it starts, you'll be good, even if the "entropy" estimate by the
> kernel drops to zero.
>
The most recent Linux distros already take care of this, as in Debian
with /etc/init.d/urandom.

Additionally I'm passing some extra seeds drawn from the TRNG at boot time.

> The bits you are seeing in the man pages about entropy dropping low
> allowing possible breaks are old, outdated statements that have not yet
> been cleaned up from the man pages.
>
Nonetheless I suppose it will not hurt to periodically add some 2048
bits to the pool on virtual machines.



William Unruh
Aug 28, 2017, 5:20:50 PM
Yes, that is total horseshit, as even Ts'o now admits. That man page has
done far more harm than all of the attackers trying to use weaknesses in
/dev/urandom have ever done.


>
> A read from the /dev/urandom device will not block waiting for more
> entropy. As a result, if there is not sufficient entropy in the entropy
> pool, the returned values are theoretically vulnerable to a
> cryptographic attack on the algorithms used by the driver. Knowledge of
> how to do this is not available in the current unclassified literature,
> but it is theoretically possible that such an attack may exist. If this
> is a concern in your application, use /dev/random instead.
>

Again horseshit.

>
> /dev/random is what's proposed for cryptographic purposes on Linux.

Yes, it is, and it is just wrong.

>
> Additionally I found this comment:
>
> The entropy pool and blocking read of /dev/random is used as a
> safe-guard to ensure the impossibility of predicting the random number;
> if, for example, an attacker exhausted the entropy pool of a system, it
> is possible, though highly unlikely with today's technology, that he can
> predict the output of /dev/urandom which hasn't been reseeded for a long
> time (though doing that would also require the attacker to exhaust the
> system's ability to collect more entropies, which is also astronomically
> improbably).
>

And those statements have been shown to be wrong for at least 10 years
by now.

If you can predict /dev/urandom, you can break all crypto on Linux. SSL
is useless then, whether using /dev/random or /dev/urandom.
> https://stackoverflow.com/questions/3690273/did-i-understand-dev-urandom
>
> But I simply like to avoid such situations as described with what's
> happening recently with that mentioned ssh attack were /dev/random get
> drained down to 1 bit of entropy for about 60 seconds. Additionally it
> seems that all virtual servers share the /dev/random device of the host
> which has some grave implications on the randomness available inside the
> virtual machine.

Yes. Use /dev/urandom, and it will not happen.

William Unruh

Aug 28, 2017, 5:27:49 PM
On 2017-08-28, Karl.Frank <Karl....@Freecx.co.uk> wrote:
This is just stupid and wrong.

>
> So a re-seeding with know values or draining /dev/random might give an
> attacker some advantage. But I'm not up to find any kind of attack but
> just some way of increasing the available data in the pool.

Sure. The attacker can sit there reading from /dev/random, causing all
programs that use it to block, tying up your whole machine. A far far
more effective attack than any wimpy "predict the stream" attack. And
one that can be carried out by any dolt.

>
>
>>> Of course this seems to be the logical approach. But on the other hand
>>> you would might need to check the source for every Linux and *BSD
>>> distribution. And furthermore after every update perhaps. In my opinion
>>> a very hefty task.
>>
>> Trust the programmers, the auditors, or audit the code yourself. You don't
>> have a lot of choice there.
>>
> I do trust them. I was simply trying to figure out what implications
> injection of randomness has in regards of increasing the available bits
> in the entropy pools.

No. It has no effect at all on the unpredictability of the pool. I am
not sure how a /dev/random read works, but at best all it does is to
reset the entropy estimate. Far simpler just to alter the code to reset
it by hand.


>
>
>>> Currently I'm observing on a virtual server which is under heavy ssh
>>> attack that the available entropy of /dev/random frequently, at least
>>> every 5 minutes, is decreasing to just 1 bit! And it stays at this
>>> extremely low level for about 30 to 60 seconds. Then very slowly
>>> increasing to 4, 10, 26, 45 and after about 3 minutes a jump up to round
>>> about 120 to 150.
>>
>> This is why FreeBSD doesn't do things that way.
>>
>>> However constantly writing some 2048 bit FIPS-140-2 compliant data to
>>> /dev/urandom keeps the available entropy of /dev/random on a regular
>>> level of about 180.
>>>
>>> So writing quality randomness to /dev/urandom seems to be a
>>> reasonable mitigation against draining /dev/random.
>>
>> Or go for a model that doesn't have a /dev/random that can be "drained".
>> See Ferguson /et al/ for the rationale.
>>
> Perhaps the mentioned example on the virtual server show that the
> quality randomness get properly injected properly.

No, it has nothing to do with the "quality of randomness". All it has to
do with is the value of the entropy estimate at best.

>
> I'm looking for a simple but universal solution for a big bunch of
> virtual servers running a widespread of different Linux distributions
> and version. Tried rngd and haveged but for each and every OS

STOP USING /dev/random

Gordon Burditt

Aug 28, 2017, 5:37:48 PM
> #-------------------
> ...
>
> Redirect output of subshell (not individual commands)
> to cope with a misfeature in the FreeBSD (not Linux)
> /dev/random, where every superuser write/close causes
> an explicit reseed of the yarrow.
> ) >/dev/urandom
>
> #-------------------
>
> Regarding this the above mentioned feeding command should read
>
> (dd if=/dev/trng bs=1 count=2048) >> /dev/urandom

I don't understand this. Assuming for the moment that "a superuser
close" causes Something Bad (TM) to happen (but I'm not willing to
concede that the Something that happens is BAD without more
explanation), how does that re-write help?

If you run:
dd if=/dev/trng bs=1 count=2048 of=/dev/urandom
or
(dd if=/dev/trng bs=1 count=2048) >> /dev/urandom

as superuser, you get exactly one superuser close of /dev/urandom
either way.

>
>
>

Rob Warnock

Aug 29, 2017, 2:28:28 AM
Karl.Frank <Karl....@Freecx.co.uk> wrote:
+---------------
| On 28.08.17 10:20, Rob Warnock wrote:
| > dd if=/dev/trng ibs=1 count=2048 obs-2048>>/dev/urandom
| > Oops! Small typo [s/-/=/]:
| > dd if=/dev/trng ibs=1 count=2048 obs=2048>>/dev/urandom
|
| Perhaps even might be faster?
|
| dd if=/dev/trng ibs=2048 >>/dev/urandom
+---------------

This breaks the goal in at least two ways [and maybe three]:

1. Without a count, it runs forever!

2. The point of the "obs" being the same size as the total
amount of data to be transferred ["count" * "ibs"] was to
make sure that "dd" does exactly *one* write to "/dev/urandom",
so that it reseeds only once. Without the "obs", the default
output buffer size of 512 will cause 2048/512 = 4 writes
and therefore *4* reseeds!

3. I have seen hardware devices/drivers that provided
*different* data with different-sized "read()"s
[usually odd little DIY devices/drivers that were
hacked together quickly]. Unless the device in question
is documented differently, it *might* provide only one
actual random byte per read, regardless of how big the
read is! ;-} In that worst case, an "ibs=2048" might
provide *one* byte of random data and *2047* zero bytes!
[...or something.]
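Rob's point 2 is easy to check against a throwaway file, since dd itself
reports the number of output writes in its "records out" line (the /tmp
path is just an assumption for illustration):

```shell
# Default obs (512) splits the 2048 bytes into four output writes:
dd if=/dev/zero of=/tmp/obs_test ibs=1 count=2048 2>&1 | grep 'records out'
# -> 4+0 records out

# obs equal to the whole transfer yields exactly one write:
dd if=/dev/zero of=/tmp/obs_test ibs=1 count=2048 obs=2048 2>&1 | grep 'records out'
# -> 1+0 records out
```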

Karl.Frank

Aug 29, 2017, 7:51:15 AM
On 29.08.17 08:24, Rob Warnock wrote:
> Karl.Frank<Karl....@Freecx.co.uk> wrote:
> +---------------
> | On 28.08.17 10:20, Rob Warnock wrote:
> |> dd if=/dev/trng ibs=1 count=2048 obs-2048>>/dev/urandom
> |> Oops! Small typo [s/-/=/]:
> |> dd if=/dev/trng ibs=1 count=2048 obs=2048>>/dev/urandom
> |
> | Perhaps even might be faster?
> |
> | dd if=/dev/trng ibs=2048>>/dev/urandom
> +---------------
>
> This breaks the goal in at least two ways [and maybe three]:
>
> 1. Without a count, it runs forever!
>
Actually it was

dd if=/dev/trng count=1 bs=256 >>/dev/urandom

I just forgot to post the count.


> 2. The point of the "obs" being the same size as the total
> amount of data to be transferred ["count" * "ibs"] was to
> make sure that "dd" does exactly *one* write to "/dev/urandom",
> so that it reseeds only once. Without the "obs", the default
> output buffer size of 512 will cause 2048/512 = 4 writes
> and therefore *4* reseeds!
>
That's a fair argument - I'll change my code accordingly.


> 3. I have seen hardware devices/drivers that provided
> *different* data with different-sized "read()"s
> [usually odd little DIY devices/drivers that were
> hacked together quickly]. Unless the device in question
> is documented differently, it *might* provide only one
> actual random byte per read, regardless of how big the
> read is! ;-} In that worst case, an "ibs=2048" might
> provide *one* byte of random data and *2047* zero bytes!
> [...or something.]
>
I run a FIPS-140-2 test on the gathered random data and only
inject it if the test does not fail.
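The monobit portion of FIPS 140-2 (count the ones in a 20,000-bit sample;
pass iff the count lies strictly between 9,725 and 10,275) can be sketched
with POSIX tools; the file paths are assumptions, not part of my setup:

```shell
# Monobit sketch per FIPS 140-2: take a 2,500-byte (20,000-bit) sample,
# count the set bits, and require 9725 < ones < 10275.
monobit() {
    ones=$(od -An -v -tu1 "$1" | awk '
        { for (i = 1; i <= NF; i++) { b = $i
              while (b > 0) { s += b % 2; b = int(b / 2) } } }
        END { print s + 0 }')
    if [ "$ones" -gt 9725 ] && [ "$ones" -lt 10275 ]; then
        echo PASS
    else
        echo "FAIL ($ones ones)"
    fi
}

# A fresh 2,500-byte sample from the kernel CSPRNG should pass:
head -c 2500 /dev/urandom > /tmp/sample.bin
monobit /tmp/sample.bin
```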


>
> -Rob
>
Many thanks for the detailed advice.


> -----
> Rob Warnock <rp...@rpw3.org>
> 627 26th Avenue <http://rpw3.org/>
> San Mateo, CA 94403



Karl.Frank

Aug 29, 2017, 7:55:23 AM
I suppose you got me totally wrong. I'm not using /dev/random, but some
programs on Linux do - hostapd for example. And I'm not willing to patch
the source code of those programs.

I'm only trying to maintain a reasonable amount of quality randomness
available in the entropy pool for those programs drawing from /dev/random.


>> there's a different version with different behaviour making it a big
>> effort maintaining it's functionality, not to mention changes after
>> updates. Currently simply injecting data directly seems to be a proper,
>> universal and long lasting solution.
>>
>>
>>> M
>>
>>



Karl.Frank

Aug 30, 2017, 9:40:06 AM
On 28.08.17 13:20, Rich wrote:
> Karl.Frank<Karl....@freecx.co.uk> wrote:
>> On 28.08.17 10:11, William Unruh wrote:
>>> On 2017-08-27, MM<mrvm...@gmail.com> wrote:
>>>> Or go for a model that doesn't have a /dev/random that can be
>>>> "drained". See Ferguson /et al/ for the rationale.
>>>
>>> Or dont use /dev/random.
>>>
>> In regards ot the above you might right, just injecting into
>> /dev/urandom might suffice.
>
> There is only one 'pool' on Linux. Both /dev/random and /dev/urandom
> pull from the same pool. Injecting into either produces the same
> result, mixing more randomness into the common pool.
>
That seems not to hold true in general on Linux.

I just discovered that Debian Jessie might maintain a different pool for
each of the two random devices. I came across this by monitoring the
start and connection attempts of WLAN devices. Surprisingly, the log
file of hostapd contains this information:

random: Cannot read from /dev/random: Resource temporarily unavailable
random: Got 0/20 bytes from /dev/random
random: Only 0/20 bytes of strong random data available from /dev/random
random: Not enough entropy pool available for secure operations
WPA: Not enough entropy in random pool for secure operations - update
keys later when the first station connects

Apparently hostapd draws its random values from /dev/random and not
/dev/urandom. And /dev/random is drained somehow after too many
connection attempts.

Additionally /dev/urandom should have a filled pool because it is
restored from a previously saved state on system boot.

A check reads

(root) gateway [~]# service urandom status
● systemd-random-seed.service - Load/Save Random Seed
Loaded: loaded (/lib/systemd/system/systemd-random-seed.service; static)
Active: active (exited) since Mon 2017-08-28 19:42:50 GMT; 13min ago
Docs: man:systemd-random-seed.service(8)
man:random(4)
Process: 233 ExecStart=/lib/systemd/systemd-random-seed load
(code=exited, status=0/SUCCESS)
Main PID: 233 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/systemd-random-seed.service

Aug 30 12:42:50 gateway systemd[1]: Starting Load/Save Random Seed...
Aug 30 12:42:50 gateway systemd[1]: Started Load/Save Random Seed.


If both random devices shared the same pool, this event should never
occur.


> You need to read this: https://www.2uo.de/myths-about-urandom/
>
> And study the "Before Linux 4.8" and "From Linux 4.8 onward" diagrams
> therein.
>
> And esp. this part:
>
> What about entropy running low?
>
> It doesn't matter.
>
> The underlying cryptographic building blocks are designed such that
> an attacker cannot predict the outcome, as long as there was enough
> randomness (a.k.a. entropy) in the beginning. A usual lower limit
> for "enough" may be 256 bits. No more.
>
> Considering that we were pretty hand-wavey about the term "entropy"
> in the first place, it feels right. As we saw, the kernel's random
> number generator cannot even precisely know the amount of entropy
> entering the system. Only an estimate. And whether the model
> that's the basis for the estimate is good enough is pretty unclear,
> too.
>
> Provided you can inject 256 bits worth of true randomness into each VM
> when it starts, you'll be good, even if the "entropy" estimate by the
> kernel drops to zero.
>
> The bits you are seeing in the man pages about entropy dropping low
> allowing possible breaks are old, outdated statements that have not yet
> been cleaned up from the man pages.
>



Rich

Aug 30, 2017, 11:44:07 AM
Karl.Frank <Karl....@freecx.co.uk> wrote:
> On 28.08.17 13:20, Rich wrote:
>> Karl.Frank<Karl....@freecx.co.uk> wrote:
>>> On 28.08.17 10:11, William Unruh wrote:
>>>> On 2017-08-27, MM<mrvm...@gmail.com> wrote:
>>>>> Or go for a model that doesn't have a /dev/random that can be
>>>>> "drained". See Ferguson /et al/ for the rationale.
>>>>
>>>> Or dont use /dev/random.
>>>>
>>> In regards ot the above you might right, just injecting into
>>> /dev/urandom might suffice.
>>
>> There is only one 'pool' on Linux. Both /dev/random and /dev/urandom
>> pull from the same pool. Injecting into either produces the same
>> result, mixing more randomness into the common pool.
>>
> That seem not to hold true in general on Linux.

Incorrect. Please see (again) https://www.2uo.de/myths-about-urandom/
in the diagram "Before Linux 4.8".

Only one "pool" of "randomness" (at least up to 4.8). So unless you
are running kernels of 4.8 or greater, there is only one pool.

> I just discovered that Debian Jessie might maintain a different pool for
> each of the two random devices. A come across this by monitoring the
> start and connection attempts of WLan devices. Surprisingly the log file
> of hostapd contain this information

Monitoring the external outputs is insufficient to determine whether
there is one pool or several. Unless you've read Jessie's kernel source,
any monitoring is simply showing behavior, not internal design.

> Apparently hostapd draw it's random values of /dev/random and not
> /dev/urandom. And /dev/random is drained somehow after too many
> connection attempts.

/dev/random is the device that blocks when the kernel's "entropy
estimate" falls below a threshold.  You are seeing nothing more than
the design of Linux's /dev/random device.

> Additionally /dev/urandom should have a filled pool because it is
> restored from a previously saved state on system boot.

Not relevant.  Only one pool.  The /dev/urandom device (prior to kernel
4.8) pulls from the same pool as /dev/random.  The difference is that
/dev/urandom continues to deliver random output bytes no matter the
value of the "entropy estimate" counter, while /dev/random will block
when the kernel's "entropy estimate" falls too low.
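That estimate is exported under /proc, so the behaviour is easy to watch
directly; and /dev/urandom keeps delivering no matter what the counter
says (a quick sketch, nothing system-specific assumed beyond Linux):

```shell
# The kernel's current entropy estimate, in bits:
cat /proc/sys/kernel/random/entropy_avail

# /dev/urandom returns immediately regardless of that estimate:
head -c 16 /dev/urandom | wc -c    # prints 16
```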

> A check reads
>
> (root) gateway [~]# service urandom status
> ● systemd-random-seed.service - Load/Save Random Seed
> Loaded: loaded (/lib/systemd/system/systemd-random-seed.service; static)
> Active: active (exited) since Mon 2017-08-28 19:42:50 GMT; 13min ago
> Docs: man:systemd-random-seed.service(8)
> man:random(4)
> Process: 233 ExecStart=/lib/systemd/systemd-random-seed load
> (code=exited, status=0/SUCCESS)
> Main PID: 233 (code=exited, status=0/SUCCESS)
> CGroup: /system.slice/systemd-random-seed.service
>
> Aug 30 12:42:50 gateway systemd[1]: Starting Load/Save Random Seed...
> Aug 30 12:42:50 gateway systemd[1]: Started Load/Save Random Seed.
>
>
> If both random devices would share the same pool this event should never
> occur.

What "event"??? You've not explained what you see above so that any of
us can figure out what you mean.

MM

Aug 30, 2017, 1:26:00 PM
On Wednesday, 30 August 2017 16:44:07 UTC+1, Rich wrote:
> Karl.Frank <Karl....@freecx.co.uk> wrote:
> > Apparently hostapd draw it's random values of /dev/random and not
> > /dev/urandom. And /dev/random is drained somehow after too many
> > connection attempts.
>
> /dev/random is the device that blocks when the kernels "entropy
> estimate" falls below a threshold. You are seeing nothing more than
> the design of Linux's /dev/random device.

If ${everybody} reckons that /dev/random is "bad" and to "just use /dev/urandom",
why is there still the distinction in Linux?

M
--

Rich

Aug 30, 2017, 1:39:00 PM
Best guess: Linus' aversion to breaking userspace (an aversion that, in
general, I am in favor of).

But, it would seem, given that the "entropy" estimate is widely
considered to be snake-oil anyway, that simply switching /dev/random to
BSD's method of "only block right after booting before things have
initialized properly" would not break anything.

MM

Aug 30, 2017, 2:09:22 PM
On Wednesday, 30 August 2017 18:39:00 UTC+1, Rich wrote:
> Best guess: Linus' aversion to breaking userspace (which in general I
> am in favor of his aversion).

That's plausible.

> But, it would seem, given that the "entropy" estimate is widely
> considered to be snake-oil anyway, that simply switching /dev/random to
> BSD's method of "only block right after booting before things have
> initialized properly" would not break anything.

When I wrote FreeBSD's Yarrow-based /dev/random in the early noughties,
it was *incredibly* controversial; folks found it very hard indeed to break
away from the "entropy bit count is conserved" and "every bit is valuable"
way of thinking. Luckily the Security Officer was on my side, but the
arguments persisted for *years*. How much broke? Nothing. Software that
read /dev/random just kept on working, and even better than before as
it didn't have to wait (except right after boot).

Years down the line, and everybody does it this way, and the papers agree
with this approach, mostly.

M
--

William Unruh

Aug 30, 2017, 2:23:58 PM
No. It draws its values from the same pool. But it maintains an estimate
of the "randomness" of that pool. When that estimate goes to zero, the
pool is still full, but the estimate is zero and /dev/random refuses to
get any bytes from it. The message above is the result. It is an idiotic
message, and the wpa author should be whipped for using /dev/random. Go
through the code and replace any references to random with urandom.

>
> Additionally /dev/urandom should have a filled pool because it is
> restored from a previously saved state on system boot.

Yes. And /dev/random also has a filled pool, it may just be that its
estimate of the randomness of the pool is half assed.

William Unruh

Aug 30, 2017, 2:25:34 PM
Tradition. And an incompetent man page.
Everyone is too afraid to change things.

>
> M

William Unruh

Aug 30, 2017, 2:39:02 PM
Imagine what would happen if someone changed things so that, for
example, /dev/random were just made a pointer to /dev/urandom. There
would be unholy shouting: "The man page says /dev/urandom is unsafe.
How can you change things to just use urandom?" And since the person
changing things is almost certainly not a cryptographer, they will not
have the self-confidence that comes from knowledge to withstand the
storm. Perhaps Ted Tso could change things-- he was, I believe, the person
who originally wrote Linux's /dev/random and /dev/urandom, and the man
page. (He was apparently relatively fresh out of his degree.)

I think it is time that /dev/random and urandom got a rethink, as there
have been a number of criticisms of it since it was written.


>
>>
>> M

Richard Kettlewell

Aug 30, 2017, 2:47:24 PM
The man page doesn’t say urandom is unsafe; and it says random is a
legacy interface. I don’t think you can blame the man page here.

“Legacy interface” answers Mark’s question, too, I think: it’s there for
the benefit of pre-existing applications that already depend on it, not
for use by code written today.

> I think it is time that /dev/random and urandom got a rethink, as
> there have been a number of criticisms of it since it was written.

AFAICS the rethink has happened and the outcome was the getrandom()
syscall.
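getrandom() isn't directly reachable from the shell, but as a quick
sketch it can be exercised through Python's os.getrandom wrapper
(assuming Python 3.6+ on Linux is available):

```shell
# Pull 16 bytes straight from the kernel via getrandom(2), no device file
python3 -c 'import os; print(os.getrandom(16).hex())'
```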

--
https://www.greenend.org.uk/rjk/

MM

Aug 30, 2017, 3:04:26 PM
On Wednesday, 30 August 2017 19:39:02 UTC+1, William Unruh wrote:
> Imagine what would happen if someone changed things so that, for
> example, /dev/random were just made a pointer to /dev/urandom.

I did, with FreeBSD (well, it was /dev/urandom -> /dev/random).

> There
> would be an unholy shouting "The man page says /dev/urandom is unsafe.
> How can you change things to just use urandom".

There was, even though the man page changed.

> And since the person
> changing things is almost certainly not a cryptographer, they will not
> have the self confidence that comes from knowledge to withstand the
> storm.

The Linux philosophy was too strong. I was interpreting the words of folks
such as Schneier, Ferguson etc, but these weren't believed for many years.

> Perhaps Ted Tso could change things-- he was I believe the person
> who originally wrote Lunux's /dev/random and /dev/urandom, and the man
> page. (He was apparently relatively fresh out of his degree).

I've never been a cryptographer, so I had to use the designs of other
people. The Linux blocking was a showstopper, and Yarrow offered
a solution in not mandating it.

> I think it is time that /dev/random and urandom got a rethink, as there
> have been a number of criticisms of it since it was written.

Many, that I've seen, but the BSD non-blocking approach now seems to
be accepted, and syscalls are also popular.

M
--

Rich

Aug 30, 2017, 3:08:24 PM
MM <mrvm...@gmail.com> wrote:
> On Wednesday, 30 August 2017 18:39:00 UTC+1, Rich wrote:
>> Best guess: Linus' aversion to breaking userspace (which in general I
>> am in favor of his aversion).
>
> That's plausible.
>
>> But, it would seem, given that the "entropy" estimate is widely
>> considered to be snake-oil anyway, that simply switching /dev/random to
>> BSD's method of "only block right after booting before things have
>> initialized properly" would not break anything.
>
> When I wrote FreeBSD's Yarrow-based /dev/random in the early noughties,
> it was *incredibly* controversial; folks found it very hard indeed to break
> away from the "entropy bit count is conserved" and "every bit is valuable"
> way of thinking.

The "every bit valuable" mindset seems weird in light of the Von Neumann
bias elimination algorithm that throws bits away as it does its work.

Any idea why it was hard to break away from?

MM

Aug 30, 2017, 3:19:00 PM
On Wednesday, 30 August 2017 20:08:24 UTC+1, Rich wrote:
> The "every bit valuable" mindset seems weird in light of the Von Neuman
> bias elimination algorithm that throws bits away as it does its work.
>
> Any idea why it was hard to break away from?

The prevailing thinking was that entropy was conserved, and there wasn't
enough mathematical knowledge to understand the subtleties of Von Neumann.

Many of the more concerted arguments came from folks who appeared to
take an attachment reminiscent of modern climate-denial. They just wouldn't
accept the opposing notion.

M
--

William Unruh

Aug 30, 2017, 3:27:52 PM
On 2017-08-25, Karl.Frank <Karl....@Freecx.co.uk> wrote:
> Assuming there's a TRNG available on a Linux machine.
>
> The question is where to constantly inject, lets say 2048bit of
> randomness drawn of the TRNG

Why not just use the rngd daemon? Its whole purpose is to take bits from
a hardware random number source and feed them to the /dev/{random,urandom}
pools.
https://linux.die.net/man/8/rngd

(On Mageia and probably redhat it is in rngd-utils)
>
> - /dev/random
>
> - /dev/urandom
>
>
>
>

Karl.Frank

Aug 31, 2017, 12:25:14 PM
On 30.08.17 17:39, Rich wrote:
> Karl.Frank<Karl....@freecx.co.uk> wrote:
>> On 28.08.17 13:20, Rich wrote:
>>> Karl.Frank<Karl....@freecx.co.uk> wrote:
>>>> On 28.08.17 10:11, William Unruh wrote:
>>>>> On 2017-08-27, MM<mrvm...@gmail.com> wrote:
>>>>>> Or go for a model that doesn't have a /dev/random that can be
>>>>>> "drained". See Ferguson /et al/ for the rationale.
>>>>>
>>>>> Or dont use /dev/random.
>>>>>
>>>> In regards ot the above you might right, just injecting into
>>>> /dev/urandom might suffice.
>>>
>>> There is only one 'pool' on Linux. Both /dev/random and /dev/urandom
>>> pull from the same pool. Injecting into either produces the same
>>> result, mixing more randomness into the common pool.
>>>
>> That seem not to hold true in general on Linux.
>
> Incorrect. Please see (again) https://www.2uo.de/myths-about-urandom/
> in the diagram "Before Linux 4.8".
>
Unfortunately due to an SSL error I can't reach the website - need to
figure out why I get

An error occurred during a connection to www.2uo.de.

Unable to generate public/private key pair.

(Error code: sec_error_keygen_fail)



> Only one "pool" of "randomness" (at least up to 4.8). So unless you
> are running kernels of 4.8 or greater, there is only one pool.
>
"Documentation and Analysis of the Linux Random Number Generator"

The evaluation of the suitability and quality of cryptographic
mechanisms is tasked to the BSI (Bundesamt für Sicherheit in der
Informationstechnik – Federal Office for Information Security) in
Germany. The BSI therefore initiated this study of the Linux Random
Number Generator (Linux-RNG).

https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/Studien/LinuxRNG/LinuxRNG_EN.pdf?__blob=publicationFile&v=5



Referring to the graphic on page 18 in the following document, there are
two different pools mentioned, input_pool and blocking_pool, as well as
the internal state of ChaCha20, for a recent Linux kernel 4.12.

Just a short quote:

The relationship of the entropy pools and the ChaCha20 DRNG to the noise
sources and to each other is visible in figure 2:

• The input_pool is the primary entropy pool that collects and
compresses the entropy from hardware events. That entropy pool has a
default size of 4,096 bits. The purpose of the input_pool is to collect
entropy from the noise sources and provide it to the secondary random
number generators discussed in the following two bullet points.

• The blocking_pool is fed with true random data from the input_pool.
From user space, this entropy pool can be accessed using the /dev/random
device file or the getrandom system call with the flag GRND_RANDOM. The
entropy pool has a size of 1,024 bytes.

• The ChaCha20 DRNG obtains its entropic seed data from the input_pool
as well and is accessible:

• from user space via /dev/urandom or the getrandom system call without
flags, and

• from kernel space via the get_random_bytes function.

The ChaCha20 DRNG has an internal state of 512 bits. However, only 256
bits, the key part of the ChaCha20 state, are filled with true random
data. Further details about the maintenance of the ChaCha20 state are
given in section 3.3.2.

What I meant by "event" is that the program should use /dev/urandom
instead of the blocking device /dev/random, in order to prevent the
occurrence of events caused by missing randomness.



Karl.Frank

Aug 31, 2017, 12:33:40 PM
As mentioned earlier, I'm looking for a simple but universal solution
for a big bunch of virtual servers running a wide spread of different
Linux distributions and versions.

I already tried rngd and haveged, but for each and every OS there's a
different version with different behaviour, making it a big effort to
maintain its functionality, not to mention changes after updates.

So it's far easier to just compile rndaddentropy
(https://github.com/rfinnie/twuewand/blob/master/rndaddentropy/rndaddentropy.c)
on every server and pipe the gathered data through.

./rndaddentropy --help
rndaddentropy, an RNDADDENTROPY ioctl wrapper
Copyright (C) 2012 Ryan Finnie <ry...@finnie.org>

Usage: $ENTROPY_GENERATOR | rndaddentropy

>>
>> - /dev/random
>>
>> - /dev/urandom
>>
>>
>>
>>



William Unruh

Aug 31, 2017, 1:46:01 PM
On 2017-08-31, Karl.Frank <Karl....@Freecx.co.uk> wrote:
> On 30.08.17 21:23, William Unruh wrote:
>> On 2017-08-25, Karl.Frank<Karl....@Freecx.co.uk> wrote:
>>> Assuming there's a TRNG available on a Linux machine.
>>>
>>> The question is where to constantly inject, lets say 2048bit of
>>> randomness drawn of the TRNG
>>
>> Why not just use rngd daemon. It's whole purpose is to feed bits from a
>> hardware random number source and feed it to the /dev/{random,urandom}
>> pools.
>> https://linux.die.net/man/8/rngd
>>
>> (On Mageia and probably redhat it is in rngd-utils)
>
>
> As mentioned earlier I'm looking for a simple but universal solution for
> a big bunch of virtual servers running a widespread of different Linux
> distributions and version.
>
> Already tried rngd and haveged but for each and every OS there's a
> different version with different behaviour making it a big effort
> maintaining it's functionality, not to mention changes after updates.
>
> So it's fairly easier just compiling rndaddentropy
> (https://github.com/rfinnie/twuewand/blob/master/rndaddentropy/rndaddentropy.c)
> on every different server and pipe the gathered data through.
>
> ./rndaddentropy --help
> rndaddentropy, an RNDADDENTROPY ioctl wrapper
> Copyright (C) 2012 Ryan Finnie <ry...@finnie.org>
>
> Usage: $ENTROPY_GENERATOR | rndaddentropy
>

Or you could choose your favourite version of rngd and compile that on
all of the systems since you are recompiling anyway.

Karl.Frank

Aug 31, 2017, 5:25:22 PM
Well, I've already tried that, but sadly ran into compiler errors on some
OSes due to missing function calls, definitions et cetera... Very
unpleasant, and far more effort to fix all this than to just compile
rndaddentropy on every server.



Karl.Frank

Sep 3, 2017, 1:18:03 PM
On 31.08.17 19:41, William Unruh wrote:
Perhaps you may wonder why I'm so stubborn and do not follow your
recommendation, so I'd like to explain it in a bit more detail.

I have to maintain a wide spread of more than 60 servers, some hardware
machines acting as hosts for virtual machines. VMs running on different
systems like Xen, OpenVZ and VMware ESXi. Some VMs are still 32-bit,
others 64-bit now, with Linux OSes like RHEL/CentOS, Debian and Slackware
with kernels from 2.6 up to 4.4. That's why I need a simple but
universal solution for injecting randomness. The reason for this
injection is not only these unpleasant SSH attacks. After
monitoring a big bunch of mail gateways I realised that the pool of
/dev/random is drained on such machines down to 1 or 2 bits, sometimes
for more than 20 minutes. It seems that there are several thousand TLS
connects from spamming machines per hour, which causes the pool drainage.
So evidently I have to find some countermeasure. And as
rndaddentropy compiles like a charm even on the older Linux derivatives
I'll stick with it.





William Unruh

Sep 3, 2017, 2:16:55 PM
Nope, I am just making suggestions, and you are perfectly free to do
whatever you want. My comments were based on the assumption that someone
else's program, which has received a lot of testing, was probably going
to be better than a home-grown solution. But your situation might make
that advantage less important.

>
> I have to maintain a widespread of more than 60 servers, some hardware
> machines acting as hosts for virtual machines. VM's running on different
> systems like Xen, OpenVZ and VMware ESXi. Some VM's are still 32bit
> others 64bit now with Linux OS like RHLE/CentOS, Debian and Slackware

I have no idea what a VM does, but it would seem to me to be better if
it went down to the lowest layer. But more importantly NO PROGRAM SHOULD
EVER USE /dev/random. It should use /dev/urandom.
You would be far better off making sure that all your programs obey
that rule than trying to kludge your way around their bugs.

Rich

Sep 3, 2017, 5:29:53 PM
Karl.Frank <Karl....@freecx.co.uk> wrote:
> Perhaps you may wonder why I'm so stubborn and do not follow your
> recommendation, so I'd like to explain it in a bit more detail.
>
> I have to maintain a wide spread of more than 60 servers, some hardware
> machines acting as hosts for virtual machines. VM's running on different
> systems like Xen, OpenVZ and VMware ESXi. Some VM's are still 32bit
> others 64bit now with Linux OS like RHEL/CentOS, Debian and Slackware
> with kernels from 2.6 up to 4.4. That's why I need a simple but
> universal solution for injecting randomness.

Or, you tweak those 60-odd servers' udev rules such that the /dev/random
device is actually a second copy of the /dev/urandom device; then
nothing blocks, whether a program uses /dev/random or /dev/urandom to
obtain randomness.

This seems overall the simplest solution.

Karl.Frank

Sep 4, 2017, 6:21:46 AM
I'm always grateful for your recommendations. I'm not sure, however,
about the quality of rngd: if you search for '"rngd" bugs' you get
quite a lot of results. The home-grown solution based on rndaddentropy
simply injects the centrally gathered true randomness directly into the
entropy pool of all servers in a cyclic run, provided the data passes a
quick FIPS 140-2 test. Different data for each server on each push, of
course. There is little chance to mess it up, because the source code of
rndaddentropy.c is fairly straightforward, neat and clean. As always, I
prefer solutions which fulfil their task without being overly complex
and are reduced to the absolute minimum necessary.

Something like the line below is sufficient for the task:

dd if=local_256byte_randomness.file | ssh root@host 'dd | rndaddentropy'
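To keep "different data on each push" explicit, the central capture can be carved into one 256-byte slice per host. In this sketch the SRC path and the host names are placeholders, and rndaddentropy is assumed to be installed on the remote side:

```shell
#!/bin/sh
# Sketch: hand each host its own 256-byte slice of a centrally
# gathered TRNG capture, so no two servers ever receive the same
# bytes. SRC is an assumed path, not a real convention.
SRC=${SRC:-/var/lib/trng/pool.bin}

slice_for_host() {   # $1 = zero-based host index
    dd if="$SRC" bs=256 count=1 skip="$1" 2>/dev/null
}

# The push then becomes one distinct slice per host, e.g.:
#   slice_for_host 0 | ssh root@mx1 'rndaddentropy'
#   slice_for_host 1 | ssh root@mx2 'rndaddentropy'
```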


>>
>> I have to maintain a widespread of more than 60 servers, some hardware
>> machines acting as hosts for virtual machines. VM's running on different
>> systems like Xen, OpenVZ and VMware ESXi. Some VM's are still 32bit
>> others 64bit now with Linux OS like RHEL/CentOS, Debian and Slackware
>
> I have no idea what a VM does, but it would seem to me to be better if
> it went down to the lowest layer. But more importantly NO PROGRAM SHOULD
> EVER USE /dev/random. It should use /dev/urandom.
> You would be far better off making sure that all your programs obey
> that rule, than trying to kludge your way around their bugs.
>
Currently on some VMs I replace /dev/random with a symlink to
/dev/urandom to redirect calls.


>> with kernels from 2.6 up to 4.4. That's why I need a simple but
>> universal solution for injecting randomness. The reason for this
>> injection is not only that there are these unpleasant SSH attacks. After
>> monitoring a big bunch of mail gateways I realised that the pool of
>> /dev/random is drained on such machines down to 1 or 2 bits, sometimes
>> for more than 20 minutes. It seems that there are several thousand TLS
>> connections from spamming machines per hour which causes the pool
>> drainage. As it becomes evident I have to find a countermeasure. And as
>> rndaddentropy compiles like a charm even on the older Linux derivatives
>> I'll stick with it.



Karl.Frank

Sep 4, 2017, 6:28:59 AM
Right. I've just added these lines to /etc/rc.local on some VMs and am
currently monitoring what happens after a reboot:

rm -f /dev/random
ln -s /dev/urandom /dev/random

Additionally, the locally captured randomness gets piped through ssh to
some other VMs, where a pool injection via rndaddentropy into
/dev/random is possible with the command:

dd if=local_256byte_randomness.file | ssh root@host 'dd | rndaddentropy'
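A small check can confirm after the reboot that the node really has been redirected. The directory argument in this sketch exists only so the logic can be exercised outside a real /dev:

```shell
#!/bin/sh
# Sketch: report whether <dir>/random is a symlink to urandom.
random_redirected() {   # $1 = device directory, normally /dev
    t=$(readlink "${1:-/dev}/random" 2>/dev/null)
    [ "$t" = "/dev/urandom" ] || [ "$t" = "urandom" ]
}

# e.g.:  random_redirected /dev && echo redirected || echo 'real device'
```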

