Here are the promised notes of the randomness-generation BoF discussion
after the Wednesday coffee break at Crypto 2012.

Context: The Lenstra--Hughes--Augier--Bos--Kleinjung--Wachter paper
"Public keys", an update of their earlier online paper "Ron was wrong,
Whit is right", had been accepted to the main Crypto program and had been
presented just before the coffee break. Ernie Brickell had given an
invited talk just before lunch on "Recent advances and existing research
questions in platform security", among other things advertising Intel's
RDRAND randomness-generation instruction. Two weeks earlier the
Heninger--Durumeric--Wustrow--Halderman paper "Mining Your Ps and Qs:
Detection of Widespread Weak Keys in Network Devices" had been presented
at the USENIX Security Symposium and had been given the best-paper award.

For many years Crypto had a free Tuesday afternoon, leaving time for
informal birds-of-a-feather (BoF) sessions; as the usual description
says, these sessions "provide an opportunity for people to gather around
and discuss common interests". In recent years the official Crypto
program has eliminated the free Tuesday afternoon, but this year the
rump-session chairs took over BoF organization and announced a
randomness-generation BoF on Wednesday 22 August 2012, 15:30--16:45,
outside Campbell Hall, at the same time as the official sessions "Secure
Computation II" followed by "Black-Box Separation".

About 25 people attended the randomness-generation BoF. The discussion
was quite active and had to be aborted at 16:50 for the start of the
IACR membership meeting; I promised to set up this randomness-generation
mailing list for continued discussions. The notes below reflect comments
made by Jon Callas, Niels Ferguson, Nadia Heninger, Jim Hughes, John
Kelsey, Tanja Lange, Hovav Shacham, Nicko van Someren, Phil Zimmermann,
and surely at least two or three other people; sorry for any missing
names.

Intel's RDRAND approach drew several objections:
* Intel has its own health tests for its hardware entropy generator
but provides no way for users to carry out their own tests.
* Intel mixes its own entropy source into its entropy pool but
provides no way for users to test the mixer or to add their own
entropy sources. Presumably we want confidence that applications
are protected by, e.g., a separately generated preseed file even if
the hardware entropy generator breaks, _and_ confidence that
applications are protected by the hardware entropy generator even
if the preseed file is bad.
* Intel's RDRAND can at any moment suddenly begin failing and
continue failing indefinitely. Apparently this is just a
theoretical possibility, never observed yet, but even the
theoretical possibility is a step backwards.

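The "protected if either source works" property asked for in the second
objection is usually obtained by hashing all of the sources together. A
minimal sketch (the example source strings are hypothetical stand-ins
for a preseed file, OS randomness, and hardware-RNG output):

```python
import hashlib

def mix_entropy(*sources: bytes) -> bytes:
    """Hash length-prefixed inputs together: the output is unpredictable
    as long as at least one input is unpredictable, even if the others
    are known or malicious."""
    h = hashlib.sha512()
    for s in sources:
        # Length prefix keeps (b"ab", b"c") distinct from (b"a", b"bc").
        h.update(len(s).to_bytes(8, "big"))
        h.update(s)
    return h.digest()[:32]  # 256-bit seed

# Hypothetical inputs; a real system would read these from the actual sources.
seed = mix_entropy(b"preseed-file-bytes", b"os-random-bytes", b"rdrand-bytes")
```

A user who distrusts the built-in mixer can run this kind of mixing in
software, with RDRAND-style output as just one of the inputs.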
There was also the usual objection to anything new: it's new and isn't
widespread; is it actually ever going to be widespread? Most CPUs, even
most Intel CPUs, don't have RDRAND. ARM is now defining an instruction
(coprocessor number etc.) for randomness-generation hardware, but this
doesn't mean that most ARM CPUs will add such hardware. It seems that
the systems most affected by randomness problems are the systems least
likely to have CPUs with anything like RDRAND; what should we do for
those systems?

The rest of the discussion considered approaches that weren't
centralized in the CPU. The opposite extreme, randomness handled by the
application, was illustrated by a voice-over-IP application reading the
microphone as a source of entropy. This also drew several objections:
* Politics: It is perhaps politically unacceptable to read the
microphone. Imagine news stories saying "Your computer is secretly
listening to you!"
* Technology: Analog-to-digital conversion is widely respected as a
source of noise, but it's not always clear that what applications
see as a "microphone" (or any other device) is actually hooked up
to a converter. Of course, the typical voice-over-IP user does have
a working microphone, but maybe at this instant the user has hit an
OS-level kill switch.
* Even if this works for a voice-over-IP application, what do other
applications do? Is every application supposed to find its own
entropy sources?

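For concreteness, the microphone approach amounts to conditioning raw
ADC samples through a hash before use, since only the low-order noise
bits carry entropy. A hedged sketch (the sample buffer here is a
hypothetical placeholder; real capture is hardware-dependent):

```python
import hashlib

def condition_samples(raw_samples: bytes, pool: bytes = b"") -> bytes:
    """Fold raw audio samples into an entropy pool via a hash. The hash
    conditions biased, correlated ADC noise into a uniform-looking
    256-bit string; it cannot create entropy the samples don't contain."""
    return hashlib.sha256(pool + raw_samples).digest()

# Hypothetical captured buffer; a real application would read the ADC.
pool = condition_samples(b"\x01\x02\x03\x04" * 64)
```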
Some people argued that randomness generation should be centralized---
whether in the OS or hypervisor or CPU---and fixed at that central
location if it doesn't work properly. Other people argued that each
library and application should defend itself against failures of the
centralized mechanism. There actually seem to be three positions on this:
* Applications should run their own PRNGs and should collect their
own entropy, viewing the OS randomness as just one entropy source.
* Applications should run their own PRNGs but should rely on the OS
to provide the initial random seed.
* Applications should rely on the OS not just for the initial seed
but also for the PRNG---i.e., should read each random number
directly from the OS.

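The third position is the simplest to illustrate; in Python, for
example, reading each random value straight from the OS looks like:

```python
import os

def random_bytes(n: int) -> bytes:
    # os.urandom reads from the OS randomness interface each time;
    # there is no application-level PRNG state to manage, fork, or clone.
    return os.urandom(n)

key = random_bytes(32)  # e.g., a 256-bit symmetric key
```

The other two positions differ only in where the PRNG state lives and
where its seed entropy comes from.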
Nobody seemed to object to adding microphones and cameras as entropy
sources in situations when the microphones and cameras were turned on
anyway. There was also some discussion of wireless interfaces as entropy
sources; the actual radio signal is usually hidden from software (thanks
to the FCC), but there's still some entropy in summaries such as the
signal strength of each access point. Apparently there is also a paper
on accelerometers as entropy sources.

Apparently the devices having the fewest entropy sources (visible to
software, anyway) are small wired routers, which are also most of the
targets of both of these papers. Treating MAC addresses as entropy
sources would have stopped both papers, but this would only hide the
problem without really fixing it: MAC addresses are visible to some
attackers and aren't actually terribly difficult to guess. Packet
arrival times aren't of much use on a quiet network, especially against
a nearby attacker, especially since the CPU clocks are rather slow and
predictable. There were some cost-related objections to preseed files on
these devices---I didn't understand the objections, so perhaps someone
else can elaborate.

There was some discussion of the /dev/random and /dev/urandom APIs. On a
device that has never seen much entropy, BSD blocks /dev/urandom reads
until the pool has been seeded, while Linux relies on /dev/urandom never
blocking (e.g., to initialize ASLR at boot). It isn't clear whether
Linux can be convinced to switch to the BSD approach.
There was also a recommendation for language-level randomness APIs.

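For reference, the two device files are read the same way; the
difference under discussion is purely their blocking behavior on a
not-yet-seeded system. A sketch, assuming a Unix-like system where the
devices exist:

```python
def read_device(path: str, n: int) -> bytes:
    # On Linux, /dev/urandom never blocks, even before the pool has been
    # seeded, while /dev/random may block; on the BSDs, reads block (at
    # most) until the pool has been seeded once.
    with open(path, "rb") as f:
        return f.read(n)

seed = read_device("/dev/urandom", 32)
```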
There was also some discussion of whether cryptography is, or at least
can be, safe inside virtual machines cloned by users. Atomically reading
an entire secret key from a (properly seeded) hypervisor randomness
generator is safe; an application-managed PRNG is unsafe; deterministic
signature generation is safe; more complicated protocols might or might
not be safe.
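The "deterministic signature generation is safe" point refers to
deriving the per-signature nonce from the secret key and message rather
than from a PRNG: two clones of the same VM signing the same message
then produce the same signature, instead of leaking the key by reusing
one nonce for two different messages. A hedged sketch of the nonce
derivation (HMAC-based, in the spirit of schemes discussed at the time,
not any particular standard):

```python
import hmac, hashlib

def deterministic_nonce(secret_key: bytes, message: bytes, order: int) -> int:
    """Derive a signature nonce from the key and message: identical
    inputs give identical nonces, so a cloned VM repeats a signature
    instead of reusing one nonce for two different messages."""
    digest = hmac.new(secret_key, message, hashlib.sha256).digest()
    # Reduce into [1, order-1]; a real scheme would avoid the small
    # bias this modular reduction introduces.
    return int.from_bytes(digest, "big") % (order - 1) + 1

# NIST P-256 group order, as an example modulus.
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
k1 = deterministic_nonce(b"secret", b"message", n)
k2 = deterministic_nonce(b"secret", b"message", n)
```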