Sep 2, 2021, 5:18:01 PM
I have invented keyed pseudo-random number generators. They can work at any block size; I am mostly testing the 128-bit version with 128-bit keys. They pass the Dieharder and PractRand test suites. For now I have dropped the Dieharder tests, because they are obsolete and slow.
In a week I was able to test 2-4 terabytes of output in PractRand; in Dieharder much less. (It could be more than 2-4 terabytes, but I consider that an acceptable minimum.)
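For reference, this is roughly how I feed a generator into PractRand: stream raw bytes to stdout and pipe them into RNG_test's stdin mode. A minimal Python sketch - the SHA-256 counter-mode generator here is only a placeholder standing in for the generator under test (it is not my design; PractRand only sees the raw byte stream, so any keyed PRNG fits in its place):

```python
import hashlib
import os
import sys

BLOCK = 32  # bytes produced per generator step

def keyed_prng(key: bytes):
    """Placeholder keyed generator: SHA-256 in counter mode.
    Stands in for the generator under test."""
    ctr = 0
    while True:
        yield hashlib.sha256(key + ctr.to_bytes(16, "little")).digest()
        ctr += 1

def stream(key: bytes, out, n_blocks: int) -> None:
    """Write n_blocks output blocks of the keyed generator to `out`."""
    gen = keyed_prng(key)
    for _ in range(n_blocks):
        out.write(next(gen))

if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage: python gen.py <32-hex-digit key> [MiB] | RNG_test stdin
    key = bytes.fromhex(sys.argv[1])
    mib = int(sys.argv[2]) if len(sys.argv) > 2 else 1024
    stream(key, sys.stdout.buffer, mib * (1 << 20) // BLOCK)
```

Then something like `python gen.py 000102030405060708090a0b0c0d0e0f 4096 | RNG_test stdin` - I believe RNG_test also accepts `stdin32`/`stdin64` and a `-tlmax` length limit, but check the options of your PractRand build.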
But the main problem is: how do you prove the quality of a keyed PRNG when you can test only a handful of the 2^128 keys? In fact, we face a similar problem with encryption algorithms: if a few keys encrypt well, how do we know that all of them will?
I would not count on a theory explaining the quality of these generators. There are some premises and justifications, and I can explain my design choices - but that does not strictly guarantee that the quality of the generators will be good on average over all keys.
I think random keys should be tested (which I do), and then one should deliberately search for weak keys. If no one finds any, we can conclude that the generators are good for all keys.
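To make the weak-key search concrete, this is the kind of cheap screening loop I have in mind: draw many random keys, run a fast statistic on a small sample from each, and flag outliers for a full PractRand run. Again the SHA-256 counter-mode generator is only a stand-in for the generator under test, and the chi-square cutoff of 400 is a rough illustrative threshold, not a calibrated p-value:

```python
import hashlib
import os

def keyed_prng_bytes(key: bytes, n: int) -> bytes:
    """Stand-in keyed generator (SHA-256 counter mode) - replace
    with the generator under test."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(16, "little")).digest()
        ctr += 1
    return bytes(out[:n])

def byte_chi2(data: bytes) -> float:
    """Chi-square statistic of the byte histogram against uniform.
    255 degrees of freedom: mean ~255, sd ~22.6 for good output."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    expected = len(data) / 256
    return sum((c - expected) ** 2 / expected for c in counts)

def screen_keys(n_keys: int, sample_bytes: int = 1 << 16,
                cutoff: float = 400.0):
    """Return (hex) keys whose sample looks suspicious; those would
    then get a full PractRand run instead of this quick screen."""
    suspects = []
    for _ in range(n_keys):
        key = os.urandom(16)
        if byte_chi2(keyed_prng_bytes(key, sample_bytes)) > cutoff:
            suspects.append(key.hex())
    return suspects
```

A screen like this only catches grossly weak keys; subtle weaknesses still need the full battery per key, and structured keys (all-zero, low-weight, related key pairs) are worth adding on top of the random draws.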
Any ideas on how to evaluate the quality of these generators?