/dev/random


JP

Aug 27, 2012, 2:11:13 AM
to randomness...@googlegroups.com

The paper of Heninger et al. recommends that "libraries should default to using the most secure mechanisms available" (section 7). The authors also explain why this recommendation is seldom followed: "The blocking behavior means that applications that read from random can hang unpredictably, and, in a headless device without human input or disk entropy, there may never be enough input for a read to complete." (6.2)

For an example of /dev/random's behavior, here are timings for copying 160 bytes from /dev/random and from /dev/urandom on a Linux machine:

$ dd if=/dev/random of=out bs=16 count=10
dd: warning: partial read (8 bytes); suggest iflag=fullblock
1+9 records in
1+9 records out
88 bytes (88 B) copied, 8.28505 s, 0.0 kB/s
$ dd if=/dev/urandom of=out bs=16 count=10
10+0 records in
10+0 records out
160 bytes (160 B) copied, 9.5094e-05 s, 1.7 MB/s

We see that /dev/random is orders of magnitude "slower", because it waits until enough entropy has accumulated to produce output. In the above example I was typing and clicking in Chrome while the output was produced. If I don't touch the computer, the result looks as follows:

$ dd if=/dev/random of=out bs=16 count=10
dd: warning: partial read (13 bytes); suggest iflag=fullblock
5+5 records in
5+5 records out
125 bytes (125 B) copied, 27.3381 s, 0.0 kB/s

What should developers of cryptography libraries/toolkits/etc. do? What should the API look like? For example, Crypto++ (http://www.cryptopp.com/) defines a BlockingRng class. A disadvantage of that approach is that it is not portable (Windows' CryptGenRandom does not block; less of an issue than /dev/urandom, apparently, as we don't see many embedded systems running Windows...).

Or should there be something in between /dev/random and /dev/urandom, like a not-so-blocking randomness source?

 


JP

Aug 27, 2012, 2:16:36 AM
to randomness...@googlegroups.com
You'll also notice that in both examples, dd copied fewer than 160 bytes from /dev/random... 

D. J. Bernstein

Aug 27, 2012, 5:28:47 AM
to randomness...@googlegroups.com
JP writes:
> Or should there be something in between /dev/random and /dev/urandom,
> like a not-so-blocking randomness source?

Yes, I think so. /dev/random differs from /dev/urandom in two ways that
shouldn't be tied together:

* Useful feature: /dev/random blocks if it has _never_ seen enough
(estimated) entropy. This is useful because it protects against a
very common type of misconfiguration of small devices.

What we really want, of course, is for the devices to always be
configured correctly. But if someone screws this up then blocking
is a much better response than producing non-random results.

(Of course, a sufficiently severe screwup---e.g., initializing many
devices with the _same_ random seed file---will still break this
feature. The core problem to address is the configuration error.)

* Superstitious nonsense: /dev/random blocks if it isn't _continuing_
to see a bit of new entropy to cover each bit of output.

The Linux /dev/urandom manual page promotes this nonsense: "the
returned values are theoretically vulnerable to a cryptographic
attack on the algorithms used by the driver". It's crazy to worry
about someone breaking SHA-512 as an entropy mixer or (bitsliced)
AES-256-CTR as an output-and-state-update mechanism; if we can't
even get such easy things right then there's no hope for securing
encryption, signatures, etc.

What cryptographic libraries actually want is the useful feature without
the superstitious nonsense. There are several ways that operating
systems could make this available:

* Upgrade /dev/urandom to add the useful feature. This protects all
the existing /dev/urandom applications, but it also creates a
portability problem, an important ambiguity in /dev/urandom---the
library writer doesn't _know_ /dev/urandom has been upgraded. The
only way for the library writer to be confident about the useful
feature is to fall back to /dev/random---which is a performance
disaster.

* Add a new device, say /dev/crandom, that has the useful feature
without the superstitious nonsense. The library writer can then
reasonably insist on using /dev/crandom. Upgrading will take a few
years but the result of the upgrade will actually work as desired.

* Upgrade /dev/random to remove the superstitious nonsense. I don't
see how this will allow easier deployment than adding /dev/crandom.

What I'd actually like to see isn't exactly /dev/crandom, but rather a

#include <sys/cryptorandom.h>
cryptorandom(unsigned char *data,size_t datalen);

syscall that never fails---merely blocks if it has never seen enough
entropy. /dev/*random will fail if the file table fills up, whether by
accident or by a denial-of-service attack; I think this is a really
stupid failure mode for something as fundamental as obtaining random
bytes from the OS.
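A user-space approximation of that interface can be sketched as follows (a sketch only: the name cryptorandom is taken from the proposal above, the descriptor is opened once at startup so a full file table at request time cannot break it, but the never-block-after-seeding guarantee genuinely needs kernel support):

```python
import os

# Open /dev/urandom exactly once, at startup, so that a file table
# filled up later (by accident or denial of service) cannot prevent
# obtaining random bytes; only startup itself can fail.
_fd = os.open("/dev/urandom", os.O_RDONLY)

def cryptorandom(n: int) -> bytes:
    """Return exactly n random bytes, looping over partial reads.
    Unlike a real syscall, this cannot verify that the pool was
    ever seeded; that part needs kernel support."""
    buf = b""
    while len(buf) < n:
        chunk = os.read(_fd, n - len(buf))
        if not chunk:  # should never happen for /dev/urandom
            raise OSError("short read from /dev/urandom")
        buf += chunk
    return buf
```

(Linux later gained getrandom(2), which has essentially these semantics: it blocks only until the pool is initialized and needs no file descriptor.)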

---Dan

Jean-Philippe Aumasson

Aug 27, 2012, 5:46:17 AM
to D. J. Bernstein, randomness...@googlegroups.com
It's probably easy to distinguish the useful feature from superstitious
nonsense: have the rand_init() function read 1 byte from
/dev/random. If it doesn't fail, it means that the RNG did receive
sufficient initial entropy, and thus that we can live with /dev/urandom.
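A minimal sketch of that init-time check (assuming Linux device semantics, where the blocking one-byte read from /dev/random returns only once the pool has accumulated entropy; rand_init is the name used above, the rest is illustrative):

```python
import os

def rand_init() -> int:
    """Block until the kernel pool has seen (estimated) entropy by
    reading one byte from /dev/random, then hand back a descriptor
    on /dev/urandom for all subsequent reads."""
    fd = os.open("/dev/random", os.O_RDONLY)
    try:
        os.read(fd, 1)   # blocks until the pool is initialized
    finally:
        os.close(fd)
    return os.open("/dev/urandom", os.O_RDONLY)

urandom_fd = rand_init()
data = os.read(urandom_fd, 16)   # never blocks after init
```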

The question is how acceptable the implied slowdown (if any) is. On
desktops/servers it is probably negligible.

Camille Vuillaume

Aug 27, 2012, 10:15:52 AM
to randomness...@googlegroups.com, D. J. Bernstein
Wouldn't it be OK/sufficient to have a PRNG seeded (once) with /dev/random?
BTW the random.c I have seen uses SHA-1 (actually folded) as the output mechanism and LFSRs as entropy mixers (and also SHA-1, seemingly for forward security). But I totally second the idea of using AES-256-CTR for output and state update.

D. J. Bernstein

Aug 27, 2012, 11:55:54 AM
to randomness...@googlegroups.com
Jean-Philippe Aumasson writes:
> It's probably easy to distinguish the useful feature from superstitious
> nonsense: have the rand_init() function read 1 byte from
> /dev/random. If it doesn't fail, it means that the RNG did receive
> sufficient initial entropy, and thus that we can live with /dev/urandom.

Consider the small devices vulnerable to "Mining your ps and qs" etc.
These devices generate essentially zero entropy---but if you put 256
bits of entropy into a preseed file, managed in the usual way, then they
become completely safe. /dev/urandom then works fine, from both a speed
perspective and a security perspective, while reading even a single byte
per process from /dev/random is a performance disaster.
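The "usual way" of managing such a preseed file can be sketched as follows (the paths are parameters purely so the sketch can run unprivileged; a real boot script targets /dev/urandom, and crediting the kernel's entropy estimate additionally requires the RNDADDENTROPY ioctl; the function name is hypothetical):

```python
import os

def restore_and_refresh_seed(seed_path: str, pool_path: str,
                             seed_len: int = 64) -> None:
    """At boot: mix the saved seed into the pool, then immediately
    replace the seed file with fresh bytes, so the same seed is
    never reused (e.g. after a crash or a cloned disk image)."""
    if os.path.exists(seed_path):
        with open(seed_path, "rb") as f, open(pool_path, "wb") as pool:
            pool.write(f.read())   # mixes the seed in; does not
                                   # credit the entropy estimate
    tmp = seed_path + ".tmp"
    with open(tmp, "wb") as f:     # write-then-rename so a crash
        f.write(os.urandom(seed_len))  # never leaves a stale seed
    os.replace(tmp, seed_path)
```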

I would guess that there's also a performance problem on larger servers
that use UNIX-style service (one process per job) rather than monolithic
service. Can /dev/random handle hundreds or thousands of requests per
second?

Camille Vuillaume writes:
> Wouldn't it be OK/sufficient to have a PRNG seeded (once) with /dev/random?

Same performance problem as above---plus the security problems that will
inevitably result from having one application writer after another try
to duplicate the same security-critical functions. We _want_ the OS to
be centralizing not just the entropy gathering but also the PRNG.

---Dan

Camille Vuillaume

Aug 27, 2012, 7:17:33 PM
to randomness...@googlegroups.com
Dan, we are on the same page; this is what I meant with my PRNG comment:
- a modern kernel PRNG design instead of two calls to SHA-1 plus mixing with LFSRs to generate 80 bits
- real forward security
- a one-time, blocking seeding mechanism
- and maybe a reseeding mechanism between boots, since it seems to be quasi-mandatory

Has anyone looked at the way entropy is calculated? Is it even possible/easy to calculate entropy? I have not looked at the details, but the current method looks suspicious.

Camille

D. J. Bernstein

Aug 27, 2012, 8:17:50 PM
to randomness...@googlegroups.com
Camille Vuillaume writes:
> - a modern kernel PRNG design instead of 2 calls to SHA-1 plus mixing
> with LFSRs for generating 80 bits
> - real forward security

Hmmm. The simplest suggestion in

http://csrc.nist.gov/groups/ST/toolkit//documents/rng/HashBlockCipherDRBG.pdf

is to overwrite a buffer (0,V,K) with the string

AES_K(V),AES_K(V+1),...,AES_K(V+15)

and then use the initial part of the string as output bits (which of
course should be cleared as they're used) producing a new (0,V,K). This
is quantitatively close to AES in security, including forward security,
and it's very fast. Does Linux have a reason to do something more
complicated?
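The shape of that construction can be sketched as follows (a sketch only: SHA-256(K || counter) stands in for AES_K(counter) since the Python stdlib has no AES, V is kept as a plain counter here whereas the NIST text overwrites it from the stream too, and the class name is hypothetical; the point is the state-overwrite structure, not the primitive):

```python
import hashlib

BLOCK = 32  # digest size of the stand-in PRF (SHA-256)

class CounterDRBG:
    """Fill a buffer with PRF_K(V), PRF_K(V+1), ...; the tail of
    the stream overwrites the key (forward security: the old key
    is gone, so past outputs cannot be reconstructed), and the
    head is handed out as output.  A real implementation would
    also zero each output byte as soon as it is consumed."""
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(b"key" + seed).digest()
        self.v = 0

    def _refill(self, nblocks: int = 16) -> bytes:
        stream = b"".join(
            hashlib.sha256(self.key + (self.v + i).to_bytes(16, "big")).digest()
            for i in range(nblocks)
        )
        self.v += nblocks
        self.key = stream[-BLOCK:]   # state overwritten from the stream
        return stream[:-BLOCK]       # head of the stream is output

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            out += self._refill()
        return out[:n]
```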

Actually, the suggestion allows a user-controlled variable instead of
15, but I don't see any reason for this. It's not as if the output bits
are talking to each other---all that matters is that each part of the
output stream is used only once.

I would switch to 256-bit Salsa20 for the usual reasons (better speed on
most platforms, quantitatively better PRF security, higher security
margin, etc.) but I wouldn't expect AES to actually cause any problems
here. Even "broken" stream ciphers will be awfully difficult to exploit
in this context.

---Dan

Jean-Philippe Aumasson

Aug 28, 2012, 3:43:46 AM
to Camille Vuillaume, randomness...@googlegroups.com
> - and maybe a reseeding mechanism between boots, since it seems to be quasi-mandatory
>

You mean in addition to the "standard" reseeding (which provides
backward security)?

> Has anyone looked at the way entropy is calculated? Is it even possible/easy to calculate entropy? I have not looked at the details, but the current method looks suspicious.
>

Two recent papers have attempted to address that:
http://eprint.iacr.org/2012/251.pdf
http://eprint.iacr.org/2012/487.pdf


Some related references:

- on the hardware side, this CRI report on the Ivy Bridge RNG has a
section about entropy estimation:
http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

- Dan Kaminsky revisiting Matt Blaze's TrueRand idea to collect
entropy from clock jitter:
http://www.cryptography.com/public/pdf/Intel_TRNG_Report_20120312.pdf

Camille Vuillaume

Aug 28, 2012, 9:21:32 AM
to randomness...@googlegroups.com
Thanks for the links!

> Dan Kaminsky revisiting Matt Blaze's TrueRand idea to collect
> entropy from clock jitter:

You probably mean this:
http://dankaminsky.com/2012/08/15/dakarand/

> http://eprint.iacr.org/2012/251.pdf
Extract from the paper: "The main disadvantage is that there
is no theoretical connection to any entropy definition".
I guess that settles it.

> http://eprint.iacr.org/2012/487.pdf
Now that looks interesting. But the first thing that came to my mind
after scanning it was "really?".
It's really strange that there is no comment or explanation of the
entropy estimation in the code.
At first I thought that the method assumed a particular distribution,
calculated the estimator, and derived the average entropy, but that does not
seem to fit.

Also relevant is the following Common Criteria guideline, pages 93 and 127.
https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Zertifierung/Interpretation/AIS31_Functionality_classes_for_random_number_generators.pdf?__blob=publicationFile

Jean-Philippe Aumasson

Aug 28, 2012, 9:48:57 AM
to Camille Vuillaume, randomness...@googlegroups.com
More entropy (via Samuel Neves):

"HAVEGE (HArdware Volatile Entropy Gathering and Expansion) is a
user-level software unpredictable random number generator for
general-purpose computers that exploits these modifications of the
internal volatile hardware states as a source of uncertainty."

https://www.irisa.fr/caps/projects/hipsor/

Jon Callas

Aug 28, 2012, 5:40:01 PM
to randomness...@googlegroups.com
One of the things to keep in perspective is that the problem that was found was in *seeding* the random number generator, not in the generation itself. Moreover, it was in the initial seed of the generator upon the first boot of the system.

Any solution that tweaks the generation process while leaving the initial seeding as an exercise for the reader is just shuffling the deck chairs. That isn't a crypto problem, it's an engineering problem. It's a design problem. But it's the real problem.

(Also, as I said at CRYPTO, there's no such thing as "/dev/random": there are likely as many architectures for /dev/random as there are unices. Possibly more, since there are now people running around creating new architectures which may or may not solve the real problem.)

Jon

Camille Vuillaume

Aug 28, 2012, 7:26:18 PM
to randomness...@googlegroups.com
This is true, but the current design seems completely ad hoc, is not very well documented, and we cannot even hope to formally and rigorously establish its security. Fixing the seeding-upon-boot problem is important, but it's just one aspect of the problem, I think.

If adding an init script that restores the entropy pool is the solution, why hasn't it been done, and why is it not automatically enforced?

Jon Callas

Aug 28, 2012, 9:51:05 PM
to randomness...@googlegroups.com

On Aug 28, 2012, at 4:26 PM, Camille Vuillaume wrote:

> This is true, but the current design seems completely ad hoc, is not very well documented, and we cannot even hope to formally and rigorously establish its security. Fixing the seeding-upon-boot problem is important, but it's just one aspect of the problem, I think.

You're assuming again that there's *a* design. I know for a fact that Linux, FreeBSD, and Mac OS X all have different /dev/random designs. I expect that OpenBSD has a different one because they can't do anything the way anyone else does, it has to be more "secure." That's their schtick.

Furthermore, there is no such thing as "Linux" as an OS. Linux is a *kernel*, and that is folded into different *distributions* that compile from source and throw in all the extra stuff you need to have a real operating system. That source ends up having different build decisions made to it, so it's completely believable that there exists at least one Linux that has a different /dev/random from some other Linux. Remember the Debian bug? It happened because of distribution-specific decisions made in building Debian.

>
> If adding an init script that restores the entropy pool is the solution, why hasn't it be done and why is it not automatically enforced?

Init scripts, and indeed the init *architecture*, are among the things that differ most among Linuxes.

You're asking a question that presumes there's some central authority, when none exists.

In some cases, like Mac OS X, there is a central authority. They can (and do) dictate security decisions about booting. However, there are still many rough edges here, including but not limited to:

* What happens on restoring from hibernation, VM-level unfreezing, booting from a live CD, cloning a VM from a source image, etc.?

* What happens if an OS boots up, restores a saved random pool, and then immediately crashes? (Answer, exactly what you'd think happens, duh.)

The answers to these and other issues are software engineering decisions; they're implementation-dependent, and they require thought. We crypto people spent a lot of time telling the OS guys that they're not qualified to solve these problems, and so they haven't. (Incidentally, I was an OS guy long before I was a crypto guy, and have been both since. I know what goes on in that sausage factory.)

Jon

Camille Vuillaume

Aug 29, 2012, 9:14:04 AM
to randomness...@googlegroups.com
You are making valid points, but I am not sure that I understand the
conclusion. Do you mean that this problem should be tackled
separately by every unix flavor?
It seems to me that in the first place, the RSA key generation problem
occurred because people used /dev/urandom instead of /dev/random, and
this happened because /dev/random in the main branch of the Linux kernel
is unusable. And I think that /dev/random could be more usable if:
- it had access to better random sources,
- or it made better use of the available randomness.
This is something that we can help with. And while we are at it, we
could also fix a few embarrassing details, such as:

> if (fips_enabled) {
>         if (!memcmp(tmp, r->last_data, EXTRACT_SIZE))
>                 panic("Hardware RNG duplicated output!\n");

Or

> /*
> * In case the hash function has some recognizable output
> * pattern, we fold it in half. Thus, we always feed back
> * twice as much data as we output.
> */

Camille

A User from London

Nov 14, 2015, 3:49:51 PM
to Randomness generation
Consider using a hardware device that measures radioactive decay to produce random numbers, e.g. connected via USB, if you feel you're waiting too long for entropy.