> Or should there be something in between /dev/random and /dev/urandom,
> like a not-so-blocking randomness source?
Yes, I think so. /dev/random differs from /dev/urandom in two ways that
shouldn't be tied together:
* Useful feature: /dev/random blocks if it has _never_ seen enough
  (estimated) entropy. This is useful because it protects against a
  very common type of misconfiguration of small devices.
  What we really want, of course, is for the devices to always be
  configured correctly. But if someone screws this up then blocking
  is a much better response than producing non-random results.
  (Of course, a sufficiently severe screwup---e.g., initializing many
  devices with the _same_ random seed file---will still break this
  feature. The core problem to address is the configuration error.)
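To make the useful feature concrete: on Linux, the getrandom(2) system
call with flags=0 (exposed in Python as os.getrandom) has exactly this
semantics---it blocks only until the kernel's pool has been initialized
once, and never blocks again after that. A minimal sketch, assuming a
Linux host:

```python
import os

def seeded_random_bytes(n: int) -> bytes:
    # Blocks only if the kernel's entropy pool has _never_ been
    # initialized; once the pool is seeded, this never blocks again.
    # A misconfigured device therefore hangs here instead of silently
    # producing non-random output.
    return os.getrandom(n)

key = seeded_random_bytes(32)
```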
* Superstitious nonsense: /dev/random blocks if it isn't _continuing_
  to see a bit of new entropy to cover each bit of output.
  The Linux /dev/urandom manual page promotes this nonsense: "the
  returned values are theoretically vulnerable to a cryptographic
  attack on the algorithms used by the driver". It's crazy to worry
  about someone breaking SHA-512 as an entropy mixer or (bitsliced)
  AES-256-CTR as an output-and-state-update mechanism; if we can't
  even get such easy things right then there's no hope for securing
  encryption, signatures, etc.
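To see how easy "such easy things" are, here is a sketch---not the
Linux driver itself, and with hypothetical domain-separation tags---of
a SHA-512-based output-and-state-update mechanism: each request hashes
the state once to produce output and once to replace the state.
Worrying about /dev/urandom's algorithms means worrying that someone
can break this kind of construction.

```python
import hashlib

class HashDRBG:
    """Sketch of an output-and-state-update mechanism using SHA-512.

    An illustration, not the actual Linux driver; the b"out" and
    b"next" tags are hypothetical domain separators. Predicting future
    output from past output requires breaking SHA-512.
    """

    def __init__(self, seed: bytes):
        self.state = hashlib.sha512(b"init" + seed).digest()

    def read(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # One hash produces an output block...
            out += hashlib.sha512(b"out" + self.state).digest()
            # ...and a second replaces the state, so a later state
            # compromise does not reveal earlier outputs.
            self.state = hashlib.sha512(b"next" + self.state).digest()
        return out[:n]
```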
What cryptographic libraries actually want is the useful feature without
the superstitious nonsense. There are several ways that operating
systems could make this available:
* Upgrade /dev/urandom to add the useful feature. This protects all
  the existing /dev/urandom applications, but it also creates a
  portability problem, an important ambiguity in /dev/urandom---the
  library writer doesn't _know_ /dev/urandom has been upgraded. The
  only way for the library writer to be confident about the useful
  feature is to fall back to /dev/random---which is a performance
  problem.
* Add a new device, say /dev/crandom, that has the useful feature
  without the superstitious nonsense. The library writer can then
  reasonably insist on using /dev/crandom. Upgrading will take a few
  years but the result of the upgrade will actually work as desired.
* Upgrade /dev/random to remove the superstitious nonsense. I don't
  see how this will allow easier deployment than adding /dev/crandom.
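During the upgrade window, a library would probe for whatever device
provides the useful feature. A sketch, using the hypothetical
/dev/crandom name from above and falling back to /dev/urandom (thereby
losing the blocking-until-seeded guarantee) where it is absent:

```python
def open_random_source():
    # /dev/crandom is the hypothetical device proposed above; on
    # systems without it we fall back to /dev/urandom, accepting that
    # the blocking-until-seeded guarantee is then not assured.
    for path in ("/dev/crandom", "/dev/urandom"):
        try:
            return open(path, "rb")
        except FileNotFoundError:
            continue
    raise OSError("no random device available")
```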
What I'd actually like to see isn't exactly /dev/crandom, but rather a
   cryptorandom(unsigned char *data,size_t datalen);
syscall that never fails---merely blocks if it has never seen enough
entropy. /dev/*random will fail if the file table fills up, whether by
accident or by a denial-of-service attack; I think this is a really
stupid failure mode for something as fundamental as obtaining random
bytes from the OS.
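A sketch of the proposed semantics, assuming a Linux host: because
getrandom(2) is a system call rather than a device file, it consumes
no file descriptor, so---unlike open("/dev/urandom")---it cannot fail
with EMFILE or ENFILE just because the file table is full.

```python
import os

def cryptorandom(datalen: int) -> bytes:
    # No open(), no file descriptor: this cannot fail because the file
    # table filled up. It blocks only if the kernel pool has never
    # been initialized, and otherwise returns exactly datalen bytes.
    buf = b""
    while len(buf) < datalen:
        # getrandom(2) may return fewer bytes than requested for large
        # requests, so loop until the buffer is full.
        buf += os.getrandom(datalen - len(buf))
    return buf
```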