
[openssl-dev] DRBG entropy


Leon Brits

Jul 27, 2016, 8:20:52 AM

Hi all,

 

I have a chip (FDK RPG100) that generates randomness, but the SP800-90B Python test suite indicated that the chip only provides 2.35 bits/byte of entropy. According to the FIPS test lab, the lowest value from all the tests is used as the entropy and 2 is too low. I must, however, make use of this chip.

 

I have looked at the paragraph in the User Guide 2.0 where low entropy sources are discussed, and I have some additional questions:

1.     In my DRBG callback for entropy (function get_entropy in the guide), I simply used our chip as the source (the driver reads from the chip and makes it available at /dev/hwrng). Now that I’ve come to learn that the chip’s entropy is too low, how do I ensure that this callback exits with a buffer of acceptable entropy?

2.     Should I just return a buffer that is 4 times larger? What if that is larger than the “max_len”?

3.     Can the DRBG repeatedly call the callback until the entropy is high enough?

 

Your advice is appreciated

 

Regards

LJB

John Denker

Jul 27, 2016, 9:47:00 AM
On 07/27/2016 05:13 AM, Leon Brits wrote:
>
> I have a chip (FDK RPG100) that generates randomness, but the
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy. According to FIPS test lab the lowest
> value from all the tests are used as the entropy and 2 is too low. I
> must however make use of this chip.

That's a problem on several levels.

For starters, keep in mind the following maxim:
Testing can certainly show the absence of entropy.
Testing can never show the presence of entropy.

That is to say, you have ascertained that 2.35 bits/byte is an
/upper bound/ on the entropy density coming from the chip. If
you care about security, you need a lower bound. Despite what
FIPS might lead you to believe, you cannot obtain this from testing.
The only way to obtain it is by understanding how the chip works.
This might require a tremendous amount of effort and expertise.

================

Secondly, entropy is probably not even the correct concept. For any
given probability distribution P, i.e. for any given ensemble, there
are many measurable properties (i.e. functionals) you might look at.
Entropy is just one of them. It measures a certain /average/ property.
For cryptologic security, depending on your threat model, it is quite
possible that you ought to be looking at something else. It may help
to look at this in terms of the Rényi functionals:
H_0[P] = multiplicity = Hartley functional
H_1[P] = plain old entropy = Boltzmann functional
H_∞[P] = adamance

The entropy H_1 may be appropriate if the attacker needs to break
all messages, or a "typical" subset of messages. The adamance H_∞
may be more appropriate if there are many messages and the attacker
can win by breaking any one of them.

To say the same thing in other words:
-- A small multiplicity (H_0) guarantees the problem is easy for the attacker.
-- A large adamance (H_∞) guarantees the problem is hard for the attacker.
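As a concrete illustration of the three functionals (a Python sketch; the skewed distribution is an invented example, and the function name is mine):

```python
import math

def renyi_functionals(P):
    """H_0 (Hartley), H_1 (Shannon/Boltzmann), H_inf (min-entropy), in bits."""
    P = [p for p in P if p > 0]
    h0 = math.log2(len(P))                    # multiplicity
    h1 = -sum(p * math.log2(p) for p in P)    # plain old entropy
    hinf = -math.log2(max(P))                 # adamance
    return h0, h1, hinf

# A skewed source: one value half the time, four others equally likely.
h0, h1, hinf = renyi_functionals([0.5, 0.125, 0.125, 0.125, 0.125])
# h0 = log2(5) ~ 2.32, h1 = 2.0, hinf = 1.0
```

Note the ordering H_∞ ≤ H_1 ≤ H_0: the average (H_1) hides the fact that half the draws are trivially guessable, which is exactly what H_∞ captures.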

================

Now let us fast-forward and suppose, hypothetically, that you
have obtained a lower bound on what the chip produces.

One way to proceed is to use a hash function. For clarity, let's
pick SHA-256. Obtain from the chip not just 256 bits of adamance,
but 24 bits more than that, namely 280 bits. This arrives in the
form of a string of bytes, possibly hundreds of bytes. Run this
through the hash function. The output word is 32 bytes i.e. 256
bits of high-quality randomness. The key properties are:
a) There will be 255.99 bits of randomness per word, guaranteed
with high probability, more than high enough for all practical
purposes.
b) It will be computationally infeasible to locate or exploit
the missing 0.01 bit.

Note that it is not possible to obtain the full 256 bits of
randomness in a 256-bit word. Downstream applications must be
designed so that 255.99 is good enough.
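A sketch of that recipe (Python; `read_raw` and the example lower bound of 2 bits/byte are placeholders — the lower bound must come from analysis of the hardware, not from testing):

```python
import hashlib
import math
import os

def conditioned_seed(read_raw, min_bits_per_byte, out_bits=256, margin_bits=24):
    """Distill raw low-quality bytes into a 32-byte seed via SHA-256.

    min_bits_per_byte must be a *lower* bound on the per-byte randomness
    of the source, established by understanding the hardware."""
    need = out_bits + margin_bits                # e.g. 256 + 24 = 280 bits
    n = math.ceil(need / min_bits_per_byte)      # raw bytes to collect
    return hashlib.sha256(read_raw(n)).digest()  # 32 bytes, ~255.99 bits

# Usage sketch: at a (hypothetical) lower bound of 2 bits/byte,
# 140 raw bytes go in and 32 conditioned bytes come out.
seed = conditioned_seed(lambda n: os.urandom(n), 2.0)
```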

========

As with all of crypto, this requires attention to detail. You
need to protect the hash inputs, outputs, and all intermediate
calculations. For example, you don't want such things to get
swapped out.
--
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev

Leon Brits

Jul 27, 2016, 11:23:55 AM
John,

Thanks for your reply.

The SP800-90B suite has different types of tests, but the test with the lowest output is used as the maximum entropy capability of the chip. That is how I understand it from the FIPS lab.

For the FIPS validation, using an NDRNG, that source must feed the DRBG directly (per the FIPS lab) and not via something like the PRNG. I currently seed /dev/random from the NDRNG and then source from the PRNG, but that is not allowed for DRBGs. Again, I hope I understand them correctly.

They said I must look at the OpenSSL user guide v2.0 para 6.1.1 where low entropy sources are discussed. Now, I already make use of the "get_entropy" function for my DRBG implementation. I used to source from the PRNG in that callback. I must now get it directly from my entropy source, which gives rise to my question of how to ensure that I have data of high entropy before the callback exits.

Regards,
LJB

Paul Dale

Jul 27, 2016, 9:33:25 PM
John's spot on the mark here. Testing gives a maximum entropy not a minimum. While a maximum is certainly useful, it isn't what you really need to guarantee your seeding.

A simple example which passes the NIST SP800-90B first draft tests with flying colours:

  seed = π - 3
  for i = 1 to n do
      seed = frac(exp(1 + 2*seed))
      entropy[i] = 256 * frac(2^20 * seed)

where frac is the fractional part function, exp is the exponential function.

I.e. start with the fractional part of the transcendental π and iterate with a simple exponential function. Take bits 21-28 of each iterate as a byte of "entropy". Clearly there is really zero entropy present: the formula is simple and deterministic; the floating point arithmetic operations will all be correctly rounded; the exponential is evaluated in a well behaved area of its curve where there will be minimal rounding concerns; the bits being extracted are nowhere near where any rounding would occur and any rounding errors will likely be deterministic anyway.

Yet this passes the SP800-90B (first draft) tests as IID with 7.89 bits of entropy per byte!
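Written out as runnable Python (math.modf supplies frac; the function name is mine):

```python
import math

def fake_entropy(n):
    """n bytes from the deterministic iteration above -- zero real entropy."""
    seed = math.pi - 3.0                                  # frac(pi)
    out = bytearray()
    for _ in range(n):
        seed = math.modf(math.exp(1.0 + 2.0 * seed))[0]   # frac(exp(1+2*seed))
        out.append(int(256.0 * math.modf(2.0 ** 20 * seed)[0]))  # bits 21-28
    return bytes(out)
```

Feeding a few megabytes of fake_entropy() output to a black-box entropy test reproduces the misleadingly high score described above.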

IID is a statistical term meaning independent and identically distributed which in turn means that each sample doesn't depend on any of the other samples (which is clearly incorrect) and that all samples are collected from the same distribution. The 7.89 bits of entropy per byte is pretty much as high as the NIST tests will ever say. According to the test suite, we've got an "almost perfect" entropy source.


There are other test suites if you've got sufficient data. The Dieharder suite is okay, but the TestU01 suite is the most discerning I'm currently aware of. Still, neither will provide an entropy estimate for you. For either of these you will need a lot of data -- since you've got a hardware RNG, this shouldn't be a major issue. Avoid the "ent" program; it seems to overestimate the maximum entropy present.


John's suggestion of collecting additional "entropy" and running it through a cryptographic hash function is probably the best you'll be able to achieve without a deep investigation. As for how much data to collect, be conservative. If the estimate of the maximum entropy is 2.35 bits per byte, round this down to 2 bits per byte, 1 bit per byte or even ½ bit per byte. The lower you go, the more likely you are to be getting the entropy you want. The trade-off is the time for the hardware to generate the data and for the processor to hash it together.


Pauli
--
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption
Phone +61 7 3031 7217
Oracle Australia


Leon Brits

Jul 28, 2016, 3:40:53 AM
Thanks for helping me understand the whole entropy thing better. I still get the feeling that this is a "best effort" thing and that nobody can actually prove what is correct. I am probably just bringing the math down to my level - sorry.

With that said, for validation I still need to be sure that I return the required entropy from the OpenSSL callback. Now, since I am not allowed to use a hash with the DRBGs (FIPS lab and SP800-90B section 8.4), can you please confirm that, with a source of raw 2 b/B entropy data, I need to return 4 times the data from the callback function?

Regards,

Leon Brits
System Engineer
Parsec

Work +27 12 678 9740 Cell +27 84 250 2855 Email le...@parsec.co.za
www.parsec.co.za/disclaimer

Hubert Kario

Jul 28, 2016, 6:51:50 AM
On Wednesday, 27 July 2016 15:23:21 CEST Leon Brits wrote:
> John,
>
> Thanks for your reply.
>
> The SP800-90B test has different types of test but the test with the lowest
> output is used as the maximum entropy capability of the chip. That is how I
> understand it from the FIPS lab.
>
> For the FIPS validation, using a NDRNG, that source must feed the DRBG
> directly (FIPS lab) and not from something like the PRNG. I use seed the
> /dev/random from the NDRNG and then source from the PRNG, but that is not
> allowed for DRBGs. Again I hope I understand them correct.

but PRNG and DRBG are the same thing, both generate pseudo-random numbers from
a seed using (hopefully) a cryptographically secure algorithm

FIPS definitely allows you to use the output of one DRBG to seed another DRBG

in the end, you should gather as much entropy as possible in the system, mix
it all together, and then use the output of a DRBG that uses all that entropy
to seed other DRBGs

what that means in practical terms is: feed the output from your NDRNG into the
kernel's entropy pool and seed everything from /dev/urandom output (or getrandom())
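Feeding the kernel pool with credited entropy can be sketched as follows (Python; the RNDADDENTROPY ioctl and rand_pool_info struct come from <linux/random.h>, crediting requires root, and in practice a daemon such as rngd does this job for you):

```python
import fcntl
import os
import struct

RNDADDENTROPY = 0x40085203  # _IOW('R', 0x03, int[2]) from <linux/random.h>

def pack_rand_pool_info(entropy_bits, data):
    # struct rand_pool_info { int entropy_count; int buf_size; __u32 buf[]; };
    return struct.pack("ii", entropy_bits, len(data)) + data

def credit_entropy(raw, bits_per_byte=2, dev="/dev/random"):
    """Mix raw bytes into the kernel pool, crediting a conservative estimate."""
    req = pack_rand_pool_info(len(raw) * bits_per_byte, raw)
    fd = os.open(dev, os.O_WRONLY)
    try:
        fcntl.ioctl(fd, RNDADDENTROPY, req)
    finally:
        os.close(fd)
```

Writing to /dev/random without the ioctl mixes the bytes in but credits no entropy; the ioctl is what lets a conservative per-byte estimate be credited.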

--
Regards,
Hubert Kario
Senior Quality Engineer, QE BaseOS Security team
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic

Short, Todd

Jul 28, 2016, 11:21:23 AM
See:


Section 4 suggests ways to de-skew.

--
-Todd Short
// "One if by land, two if by sea, three if by the Internet."


John Denker

Jul 28, 2016, 12:09:08 PM
Let's play a guessing game. I provide a hardware-based random number
generator of my choosing. It produces a stream of bytes. It has
an entropy density greater than 2.35 bits per byte. This claim is
consistent with all the usual tests, but it is also more than that;
it is not just "apparent" entropy or an upper bound based on testing,
but real honest-to-goodness Boltzmann entropy. The bytes are IID
(independent and identically distributed). The design and
implementation are open to inspection.

On each move in this game, I try to predict the exact value of the
next byte. Every time I succeed, you pay me a dollar; every time
I fail, I pay you a dollar. We play at least 100 moves, to minimize
stray fluctuations.

The point is, if you think entropy is a good measure of resistance
to guessing, then you should be eager to play this game, expecting
a huge profit.

Would anybody like to play?
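To see the gap between average entropy and guessability concretely, consider a simulation sketch (Python; the distribution is invented for illustration, not taken from the chip):

```python
import math
import random

p_max = 0.6                                   # most common byte appears 60% of the time
probs = [p_max] + [(1.0 - p_max) / 255] * 255
H1 = -sum(p * math.log2(p) for p in probs)    # ~4.17 bits/byte, well above 2.35

rng = random.Random(1)                        # fixed seed: a simulation, not crypto
profit = 0
for _ in range(10_000):
    byte = rng.choices(range(256), weights=probs)[0]
    profit += 1 if byte == 0 else -1          # predictor always guesses the common value
# Expected profit per move is 2*p_max - 1 = +$0.20, despite the "high entropy".
```

The Shannon entropy clears the 2.35 bits/byte bar by a wide margin, yet the predictor steadily makes money: average measures say nothing about the worst case.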


On 07/28/2016 12:40 AM, Leon Brits wrote:
> Thanks for the helping me understand the whole entropy thing better.
> It is still get the feeling that this is a "best effort" thing and
> that nobody can actually proof what is correct. I am probably just
> bringing the math down to my level - sorry.
>
> With that said for validation I still need to be sure that I give the
> required entropy back from the OpenSSL callback. Now since I am not
> allowed to use a hash with the DRBGs (FIPS lab and SP800-90B section
> 8.4), can you please confirm that, with a source of raw 2b/B entropy
> data, I need to return 4 times the data from the callback function?

That depends on what the objective is. The objective is not
obvious, as discussed below.

> According to FIPS test lab the lowest value from all the tests are
> used as the entropy and 2 is too low.

1a) I assume the idea that "2 is too low" comes from the FIPS lab.

1b) I assume the designer's boss doesn't directly care about this,
so long as the FIPS lab is happy.

1c) This requirement has little if any connection to actual security.

> I must however make use of this chip.

2a) I assume the FIPS lab doesn't care exactly which chip is used.

2b) I assume this requirement comes from the boss.

2c) This requirement has little if any connection to actual security.

> I am not allowed to use a hash with the DRBGs (FIPS lab and
> SP800-90B section 8.4),

Where's That From? Section 8.4 says nothing about hashes. It's about
health testing. The hash doesn't interfere with health testing, unless
the implementation is badly screwed up.

Furthermore, in sections 8.2 and 8.3, and elsewhere, there is explicit
consideration of "conditioning", which is what we're talking about.

3a) Does this requirement really come from the FIPS lab? It
certainly doesn't come from SP800-90B as claimed.

3c) This requirement has nothing to do with actual security.

> It is still get the feeling that this is a "best effort" thing and
> that nobody can actually proof what is correct.

Where's That From?

Proofs are available, based on fundamental physics and math, delineating
what's possible and what's not.

> can you please confirm that, with a source of raw 2b/B entropy data,
> I need to return 4 times the data from the callback function?

Two answers:
-- My friend Dilbert says you should do that, in order to make the
pointy-haired boss happy.
-- You should not, however, imagine that it provides actual security.

> I have a chip (FDK RPG100) that generates randomness, but the
> SP800-90B python test suite indicated that the chip only provides
> 2.35 bits/byte of entropy

That means the chip design is broken in ways that the manufacturer
does not understand. The mfgr data indicates it "should" be much
better than that:
http://www.fdk.com/cyber-e/pdf/HM-RAE103.pdf

The mfgr has not analyzed the thing properly, and nobody else will
be able to analyze it at all. The block diagram in the datasheet
is a joke:
http://www.fdk.com/cyber-e/pdf/HM-RAE106.pdf#Page=9

> I must however make use of this chip.

My friend suggests you XOR the chip output with a decent, well-understood
HRNG. That way you can tell the pointy-haired boss that you "make use
of this chip".
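The XOR combiner itself is trivial (Python sketch; the guarantee holds only if the two streams are genuinely independent):

```python
def xor_combine(a: bytes, b: bytes) -> bytes:
    """Combine two independent random streams byte-wise.

    If the streams are independent, the result is at least as
    unpredictable as the better of the two inputs."""
    if len(a) != len(b):
        raise ValueError("streams must be the same length")
    return bytes(x ^ y for x, y in zip(a, b))
```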



------------

Bottom line: consider the contrast:
-- I'm seeing a bunch of feelings and made-up requirements.
-- I have not yet seen any sign of concern for actual security.

Under such conditions it is not possible to give meaningful advice
on how to proceed.

Kurt Roeckx

Jul 28, 2016, 6:31:34 PM
On Wed, Jul 27, 2016 at 05:32:49PM -0700, Paul Dale wrote:
> John's spot on the mark here. Testing gives a maximum entropy not a minimum. While a maximum is certainly useful, it isn't what you really need to guarantee your seeding.

From what I've read, some of the non-IID tests actually underestimate
the actual entropy. That is of course better than overestimating
it, but it's also annoying.

It will also never give a value higher than 6, since one of the
tests only uses 6 bits of the input.

> IID is a statistical term meaning independent and identically distributed which in turn means that each sample doesn't depend on any of the other samples (which is clearly incorrect)

You shouldn't run the IID tests when you clearly know the source is not
IID. In fact, if you're not sure it's IID, you should use the
non-IID tests.


Kurt

Paul Dale

Jul 28, 2016, 8:38:49 PM
I probably should have mentioned this in my earlier message, but the exponential example is valid for the NIST SP800-90B non-IID tests too: 5.74889 bits per byte of assessed entropy. Again, about as good a result as the tests will ever produce given the ceiling of six on the output. There is still zero actual entropy in the data. The tests have massively overestimated.


Pauli
--
Oracle
Dr Paul Dale | Cryptographer | Network Security & Encryption
Phone +61 7 3031 7217
Oracle Australia


-----Original Message-----
From: Kurt Roeckx [mailto:ku...@roeckx.be]
Sent: Friday, 29 July 2016 8:31 AM
To: opens...@openssl.org
Subject: Re: [openssl-dev] DRBG entropy

Leon Brits

Jul 29, 2016, 3:37:07 AM
Paul,

> I probably should have mentioned this in my earlier message, but the
> exponential example is valid for the NIST SP800-90B non-IID tests too:
> 5.74889 bits per byte of assessed entropy. Again about as good a result
> as the tests will ever produce given the ceiling of six on the output.
> There is still zero actual entropy in the data. The tests have massively
> overestimated.

I am just trying to get our device validated, and am not in a position to discuss the lab's test methodologies or the intrinsic math behind them. I thought that by using the NDRNG chip the entropy would not be a problem. I should have done my homework better.

Thanks for your time
LJB

Leon Brits

Jul 29, 2016, 4:00:53 AM
John,

> Let's play a guessing game. I provide a hardware-based random number
> generator of my choosing. It produces a stream of bytes. It has an
> entropy density greater than 2.35 bits per byte. This claim is consistent
> with all the usual tests, but it is also more than that; it is not just
> "apparent" entropy or an upper bound based on testing, but real honest-to-
> goodness Boltzmann entropy. The bytes are IID (independent and
> identically distributed). The design and implementation are open to
> inspection.
>
> On each move in this game, I try to predict the exact value of the next
> byte. Every time I succeed, you pay me a dollar; every time I fail, I pay
> you a dollar. We play at least 100 moves, to minimize stray fluctuations.
>
> The point is, if you think entropy is a good measure of resistance to
> guessing, then you should be eager to play this game, expecting a huge
> profit.
>
> Would anybody like to play?

Brilliant analogy!

> 1a) I assume the idea that "2 is too low" comes from the FIPS lab.

Yes

> 1b) I assume the designer's boss doesn't directly care about this,
> so long as the FIPS lab is happy.

Yes

> 1c) This requirement has little if any connection to actual security.

Correct - it's about perception of the client

> 2a) I assume the FIPS lab doesn't care exactly which chip is used.

They did request information about its workings, which FDK was not willing to divulge.

> 2b) I assume this requirement comes from the boss.

Correct

> 2c) This requirement has little if any connection to actual security.

Correct

> > I am not allowed to use a hash with the DRBGs (FIPS lab and SP800-90B
> > section 8.4),
>
> Where's That From? Section 8.4 says nothing about hashes. It's about
> health testing. The hash doesn't interfere with health testing, unless
> the implementation is badly screwed up.
>
> Furthermore, in sections 8.2 and 8.3, and elsewhere, there is explicit
> consideration of "conditioning", which is what we're talking about.
>
> 3a) Does this requirement really come from the FIPS lab? It
> certainly doesn't come from SP800-90B as claimed.

I will ask them the question.

> 3c) This requirement has nothing to do with actual security.
>
> > It is still get the feeling that this is a "best effort" thing and
> > that nobody can actually proof what is correct.
>
> Where's That From?

My opinion (after working on this project)

> Two answers:
> -- My friend Dilbert says you should do that, in order to make the
> pointy-haired boss happy.

Wise old Dilbert

> -- You should not, however, imagine that it provides actual security.

Understood.

> That means the chip design is broken in ways that the manufacturer does
> not understand. The mfgr data indicates it "should" be much better than
> that:
> http://www.fdk.com/cyber-e/pdf/HM-RAE103.pdf

Agreed. That is why it was selected (from what I heard).

> The mfgr has not analyzed the thing properly, and nobody else will be able
> to analyze it at all. The block diagram in the datasheet is a joke:
> http://www.fdk.com/cyber-e/pdf/HM-RAE106.pdf#Page=9
>
> > I must however make use of this chip.
>
> My friend suggests you XOR the chip output with a decent, well- understood
> HRNG. That way you can tell the pointy-haired boss that you "make use of
> this chip".

I wish I could, but my only option to solve this now is using software.

Thanks a lot for your reply and the time taken to help.
Much appreciated

John Denker

Jul 29, 2016, 1:48:49 PM
In the context of:

>> I have a chip (FDK RPG100) that generates randomness, but the
>> SP800-90B python test suite indicated that the chip only provides
>> 2.35 bits/byte of entropy

On 07/28/2016 09:08 AM, I wrote:

> That means the chip design is broken in ways that the manufacturer
> does not understand. The mfgr data indicates it "should" be much
> better than that:
> http://www.fdk.com/cyber-e/pdf/HM-RAE103.pdf

To be scientific, we must consider all the relevant hypotheses.

1) For starters, let's consider the possibility that the python
test suite is broken. Apply the test suite to a sufficiently
random stream.
-- An encrypted counter should be good enough.
-- /dev/urandom is not a paragon of virtue, but it should be
good enough for this limited purpose.

1a) If the test suite reports a low randomness for the truly random
stream, then the test is broken. Find a better test suite and
start over from Square One.

1b) If the test suite reports a high randomness for the random stream
but a low randomness for the chip, the chip is broken and cannot be
trusted for any serious purpose.
-- You could derate it by another factor of 10 (down to 0.2
bits per byte) and I still wouldn't trust it. A stopped
clock tells the correct time twice a day, but even so, you
should not use it for seeding your PRNG.
-- It must be emphasized yet again that for security you
need a lower bound on the randomness of the source.
Testing cannot provide this. A good test provides an upper
bound. A bad test tells you nothing. In any case, testing
does not provide what you need. Insofar as the chip passes
some tests but not others, that should be sufficient to prove
and illustrate the point.

Seriously, if the FIPS lab accepts the broken chip for any
purpose, with or without software postprocessing, then you
have *two* problems: a chip that cannot be trusted and a
lab that cannot be trusted.


2a) We must consider the possibility that the bare chip
hardware is OK, but there is a board-level fault, e.g.
wrong voltage, wrong readout timing, or whatever.

2b) Similarly there is the possibility that the bare chip
hardware is OK but the data is being mishandled by the
system-level driver software. This should be relatively
easy to fix.

===========

It must be emphasized yet again that entropy (p log 1/p) is
probably not the thing you care about anyway. If the entropy
density is high (nearly 8 bits per byte) *and* you understand
why it is not higher, you may be able to calculate something
you can trust ... but let's not get ahead of ourselves.

Kurt Roeckx

Jul 29, 2016, 5:51:38 PM
On Thu, Jul 28, 2016 at 03:40:38PM -0700, Paul Dale wrote:
> I probably should have mentioned this in my earlier message, but the exponential example is valid for the NIST SP800-90B non-IID tests too: 5.74889 bits per byte of assessed entropy. Again about as good a result as the tests will ever produce given the ceiling of six on the output. There is still zero actual entropy in the data. The tests have massively overestimated.

Tests are never perfect. There are various things you can do that
will result in a higher entropy estimate than the source really
has. You should really know what you're testing and feed in
something from a real noise source.

Some examples of things that will probably give a very high score:
- Any complex sequence of numbers, including things like pi, e,
the output of a PRNG, ...
- Add 1 bit of entropy (or even less) into a hash function for
every byte that you pull out of it.

I think it's important to have a model of your noise source and
check that the noise you get actually matches that model. This
model should also include the expected entropy you get out of your
noise source.


Kurt

Kurt Roeckx

Jul 29, 2016, 6:19:41 PM
On Thu, Jul 28, 2016 at 09:08:32AM -0700, John Denker wrote:
>
> That means the chip design is broken in ways that the manufacturer
> does not understand. The mfgr data indicates it "should" be much
> better than that:
> http://www.fdk.com/cyber-e/pdf/HM-RAE103.pdf

Reading that, you don't seem to have access to the raw entropy
source, so the tests you are doing are meaningless. It should
really always give you a perfect score, since the output should
already be at least whitened.

I have a feeling that there is at least a misunderstanding of what
that draft standard is saying and that it isn't being followed.

But if the tests still give you such a low score something seems
to be wrong, which might either be the hardware or software.

Have you tried running NIST's software
(https://github.com/usnistgov/SP800-90B_EntropyAssessment)
yourself? Can you run it in verbose mode and give the results of
all the tests it ran?

Leon Brits

Aug 1, 2016, 5:18:31 AM
Kurt,

> -----Original Message-----
> From: openssl-dev [mailto:openssl-d...@openssl.org] On Behalf Of
> Kurt Roeckx
> Sent: 30 July 2016 12:19 AM
> To: opens...@openssl.org
> Subject: Re: [openssl-dev] DRBG entropy

> Have you tried running NIST's software
> (https://github.com/usnistgov/SP800-90B_EntropyAssessment)
> yourself? Can you run it in verbose mode and give the results of all the
> tests it ran?

Yes, this is the test that indicated an entropy of 2 b/B. I ran the test on 1 MB and 4 MB files and the results were 2.19 and 2.35 respectively. The 4 MB file test output is appended below.
Now in the OpenSSL UG2.0 section 6.1.1 a paragraph states:
"Now suppose we have a low grade entropy source which provides just 1 bit of entropy per byte. Again assume it is uniform (e.g. we don't get 8 bits of entropy in byte 1 and nothing in the next 7). Again let's have a block size of 16 bytes. This time to get 256 bits of entropy the source must provide it in a 256 byte buffer. An extra block is required which makes 272 bytes but because we only have 1 bit of entropy per byte it just needs to supply 272 bits of entropy."
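The arithmetic in that paragraph generalizes as follows (a sketch; the function name is mine, and it takes the claimed per-byte rate at face value, which the rest of this thread argues against):

```python
import math

def seed_buffer_bytes(security_bits=256, bits_per_byte=1.0, block=16):
    """Buffer size per the UG 6.1.1 arithmetic: enough whole blocks to
    carry security_bits of entropy, plus one extra block."""
    raw = math.ceil(security_bits / bits_per_byte)  # bytes carrying the entropy
    raw = -(-raw // block) * block                  # round up to whole blocks
    return raw + block                              # plus the extra block

assert seed_buffer_bytes(bits_per_byte=1.0) == 272  # the UG's 1 bit/byte example
# At 2 bits/byte the same rule gives 128 + 16 = 144 bytes, i.e. 9 blocks of 16.
```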

Am I correct to state that, for a tested entropy source of 2 b/B and the same assumptions as in the paragraph, I need to return 8 blocks of 16 B each from my get_entropy() callback?

Thanks
LJB

******************************************************
Read in file randomness.bin, 4194304 bytes long.
Dataset: 4194304 8-bit symbols, 256 symbols in alphabet.
Output symbol values: min = 0, max = 255

Running entropic statistic estimates:
- Most Common Value Estimate: p(max) = 0.00411016, min-entropy = 7.92659
- Collision Estimate: p(max) = 0.00873199, min-entropy = 6.83947
- Markov Estimate (map 6 bits): p(max) = 9.71537e-228, min-entropy = 5.89156
- Compression Estimate: p(max) = 0.00743246, min-entropy = 7.07194
- t-Tuple Estimate: p(max) = 0.00495551, min-entropy = 7.65675
- LRS Estimate: p(max) = 0.155747, min-entropy = 2.68272

Running predictor estimates:
Computing MultiMCW Prediction Estimate: 99 percent complete
Pglobal: 0.003997
Plocal: 0.001358
MultiMCW Prediction Estimate: p(max) = 0.00399729, min-entropy = 7.96676

Computing Lag Prediction Estimate: 99 percent complete
Pglobal: 0.004009
Plocal: 0.001358
Lag Prediction Estimate: p(max) = 0.00400879, min-entropy = 7.96262

Computing MultiMMC Prediction Estimate: 99 percent complete
Pglobal: 0.004934
Plocal: 0.195448
MultiMMC Prediction Estimate: p(max) = 0.195448, min-entropy = 2.35514

Computing LZ78Y Prediction Estimate: 99 percent complete
Pglobal: 0.004034
Plocal: 0.195448
LZ78Y Prediction Estimate: p(max) = 0.195448, min-entropy = 2.35514
-----------------------
min-entropy = 2.35514

John Denker

Aug 1, 2016, 9:34:42 AM
On 08/01/2016 02:17 AM, Leon Brits wrote:

> Am I correct to state that for a tested entropy source of 2b/B and
> the same assumptions as in the paragraph, I need to return 8 blocks
> of 16B each in my get_entropy() callback?

No, that is not correct, for the reasons previously explained.

> Again assume it is uniform (e.g. we don't get 8 bits of entropy in byte 1 and nothing in the next 7).

That assumption is invalid, if we believe the LRS test.
Quoting from LRS.py:

>> # Length of the Longest Repeated Substring Test - Section 5.2.5
>> # This test checks the IID assumption using the length of the longest repeated
>> # substring. If this length is significantly longer than the expected value,
>> # then the test invalidates the IID assumption.

Accumulating 8 or more blocks might make sense if the data were IID,
but it isn't. Either that or the LRS test itself is broken, which
is a possibility that cannot be ruled out. By way of analogy, note
that the p(max) reported by the Markov test is clearly impossible
and inconsistent with the reported min-entropy.

Suggestion: Modify LRS.py to print (in hex) the longest repeated
substring. Then verify by hand that the string really does recur
in the data.
-- If it doesn't, then the test is broken.
-- If it does, then either the chip is broken or you're using it wrong.
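That check is easy to script; a straightforward sketch (mine, not the algorithm LRS.py uses; fine for modest capture sizes):

```python
def longest_repeated_substring(data: bytes) -> bytes:
    """Longest substring occurring at least twice (overlaps allowed)."""
    def find_repeat(k):                      # any length-k substring seen twice?
        seen = set()
        for i in range(len(data) - k + 1):
            s = data[i:i + k]
            if s in seen:
                return s
            seen.add(s)
        return None

    lo, hi, best = 1, len(data) - 1, b""
    while lo <= hi:                          # repeat-at-length-k is monotone in k,
        mid = (lo + hi) // 2                 # so binary search on the length works
        s = find_repeat(mid)
        if s is not None:
            best, lo = s, mid + 1
        else:
            hi = mid - 1
    return best
```

Print the result in hex and search the raw capture for it by hand to confirm it really recurs.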

Remind your boss that the whole point of the certification process is to
make sure that broken hardware doesn't get certified.

Also:
*) Please stop using "entropy" as a synonym for randomness. Some things
have very little entropy but are still random enough for a wide range
of purposes. Meanwhile other things have large entropy but are not
random enough.
*) Please stop using "entropy" as a synonym for "min-entropy". The
latter is a two-word idiomatic expression. A titmouse is not a mouse.
Buckwheat is not a form of wheat. The Holy Roman Empire was neither
holy, nor Roman, nor an empire.

Just because openssl is sloppy about this doesn't make it OK.