MWC RNGs are ordinarily based on primes of the form
p=ab^r-1, with b=2^32 or 2^64 and a<b. They generate,
32 or 64 bits at a time and in reverse order, the
binary expansions of rationals j/p, 0<j<p; the
period of that binary expansion is the order of 2 mod p.
The key part of the MWC process is: given 32-bit a,x,c,
extract the top and bottom 32 bits of the 64-bit a*x+c,
or, given 64-bit a,x,c, extract the top and bottom
64 bits of the 128-bit a*x+c.
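A minimal sketch of that step, assuming nothing beyond the
paragraph above and using a 64-bit intermediate for clarity
(the choice a=2^i-1 discussed further below avoids the
double-width product); the multiplier here is illustrative
only, not a recommended MWC constant:

#include <stdio.h>
#include <stdint.h>

int main(void)
{/* one MWC step: split the 64-bit a*x+c into new x (bottom
    32 bits) and new carry (top 32 bits) */
 uint32_t a=4294957665u;      /* illustrative value only */
 uint32_t x=362436069u, c=521288629u;
 int i;
 for(i=0;i<5;i++)
   {uint64_t t=(uint64_t)a*x+c;
    x=(uint32_t)t;            /* bottom 32 bits: next output */
    c=(uint32_t)(t>>32);      /* top 32 bits: next carry */
    printf("x=%10u  c=%10u\n",(unsigned)x,(unsigned)c);
   }
 return 0;
}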
I have put various computers to work establishing the
order of 2 for MWC primes of the form ab^r-1, b=2^32,
or CMWC (Complementary-Multiply-With-Carry), p=ab^r+1.
For example, it took 24 days on an early laptop to
establish that the order of 2 for p=54767*2^1337287+1
is (p-1)/2^3, and just last week I put a Samsung 64-bit
laptop to finding the order of 2 for the 11th largest
known prime, p=27653*2^9167433+1; in CMWC form,
p=14158336*b^286482+1. I expect it may take months.
Such searches are motivated by more than a Mt. Everest
syndrome, as extremely-long-period RNGs not only seem to
perform very well on tests of randomness, but have uses
for cryptography. For example, suppose, from a random
set of seeds, you have observed 100000 successive 32-bit
numbers from a CMWC RNG based on p=14158336*b^286482+1.
You will then be somewhere in an immense loop that has
over 10^(2million) appearances of exactly that same
sequence of 100000, leaving you virtually no chance of
finding your particular location and thus of predicting
forthcoming values.
Of course, if you can observe 286482 successive values,
then, after a little modular arithmetic to determine
the carry, you are OK. It is ensuring a large exponent
for b that makes the RNG desirable---the larger the better.
But in seeking large exponents for b and record periods,
why bother with an Everest when there are ranges with
peaks far beyond surveyor's skills but certain
to be several times as tall?
Such ranges come from considering, for very large r,
composites of the form m=ab^r-1, rather than primes of
that form. The example for this posting: the composite
m=(2^28-1)*2^(2^27)-1=(2^28-1)*b^(2^22)-1.
m has no prime divisors less than 21000000, and order(2,m)
is the lcm of the orders mod the prime divisors of m.
We cannot---and may never be able to---factor such an
m>10^40403570, but for primes q, the average value of
order(2,q)/q is around .572. Thus, even if m has, say,
40 prime factors, since (.572)^40>10^(-10), we can only
expect to "lose" around ten 10's, and perhaps ten more
through common factors for the lcm, providing an estimate
around 10^40403550 for the order of 2 mod m.
That the order of 2 mod m is less than 10^(40million)
seems extremely unlikely; we would have to "lose" over
400 thousand 10's, when something in the range
of three to sixty 10's seems more likely.
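A quick sketch of the arithmetic behind that estimate (the
constants come from the paragraphs above; the 40-factor
figure is the hypothetical one used there):

#include <stdio.h>
#include <math.h>

int main(void)
{double log10_m=(134217728.0+28.0)*log10(2.0); /* m ~ 2^(2^27+28) */
 double loss=40.0*log10(0.572);                /* ~ -9.7 */
 printf("log10(m) is about %.0f\n",log10_m);   /* ~ 40.4 million */
 printf("40 factors lose about %.1f tens\n",-loss);
 return 0;
}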
So users should feel confident that the periods of the
following MWC RNGs far exceed 10^(40million). Based on
m=(2^28-1)b^(2^22)-1=(2^28-1)b^4194304-1 with b=2^32,
m=(2^28-1)B^(2^21)-1=(2^28-1)B^2097152-1 with B=2^64,
they are simple and very fast.
You would have to observe more than 134 million
successive bits produced by these MWC RNGs
in order to get close to cracking them.
Note that choosing a=2^i-1 in m=ab^r-1 makes finding
the top and bottom 32 bits of a purported 64 bit
a*x+c feasible using only 32-bit arithmetic,
or, within 64-bit arithmetic, finding the top
and bottom 64 bits of a purported 128 bit a*x+c.
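A sketch of why that works in the 32-bit case, checking the
shift/subtract identity a*x+c = (x>>4)*2^32 + ((x<<28)+c) - x
against a full 64-bit reference (the values here are
arbitrary; the carry is kept below a, as in the generator):

#include <stdio.h>
#include <stdint.h>

int main(void)
{uint32_t x=0x9E3779B9u, c=0x01234567u;    /* c < 2^28-1 = a */
 uint64_t ref=(uint64_t)((1u<<28)-1)*x+c;  /* full 64-bit a*x+c */
 uint32_t t=(x<<28)+c;             /* no overflow: (x&15)<<28 + c < 2^32 */
 uint32_t lo=t-x;                  /* bottom 32 bits */
 uint32_t hi=(x>>4)-(t<x);         /* top 32 bits, less the borrow */
 printf("reference: hi=%08x lo=%08x\n",
        (unsigned)(ref>>32),(unsigned)ref);
 printf("trick    : hi=%08x lo=%08x\n",(unsigned)hi,(unsigned)lo);
 return 0;
}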
Here are C versions using, for 32-bit integers,
an unsigned long array Q[4194304], and for 64-bits,
an unsigned long long array Q[2097152].
Try them and see if you get the same results I do.
--------------------32-bit MWC version-------------------
#include <stdio.h>
static unsigned long Q[4194304],carry=0;
unsigned long b32MWC(void)
{unsigned long t,x; static int j=4194303;
j=(j+1)&4194303;
x=Q[j]; t=(x<<28)+carry;
carry=(x>>4)-(t<x);
return (Q[j]=t-x);
}
#define CNG ( cng=69069*cng+13579 )
#define XS ( xs^=(xs<<13), xs^=(xs>>17), xs^=(xs<<5) )
#define KISS ( b32MWC()+CNG+XS )
int main(void)
{unsigned long int i,x,cng=123456789,xs=362436069;
/* First seed Q[] with CNG+XS: */
for(i=0;i<4194304;i++) Q[i]=CNG+XS;
/* Then generate 10^9 b32MWC()s */
for(i=0;i<1000000000;i++) x=b32MWC();
printf("Does x=2769813733 ?\n x=%lu\n",x);
/* followed by 10^9 KISSes: */
for(i=0;i<1000000000;i++) x=KISS;
printf("Does x=3545999299 ?\n x=%lu\n",x);
return 0;
}
---------------- 64-bit MWC version ---------------------
#include <stdio.h>
static unsigned long long Q[2097152],carry=0;
unsigned long long B64MWC(void)
{unsigned long long t,x; static int j=2097151;
j=(j+1)&2097151;
x=Q[j]; t=(x<<28)+carry;
carry=(x>>36)-(t<x);
return (Q[j]=t-x);
}
#define CNG ( cng=6906969069LL*cng+13579 )
#define XS ( xs^=(xs<<13), xs^=(xs>>17), xs^=(xs<<43) )
#define KISS ( B64MWC()+CNG+XS )
int main(void)
{unsigned long long i,x,cng=123456789987654321LL,
xs=362436069362436069LL;
/* First seed Q[] with CNG+XS: */
for(i=0;i<2097152;i++) Q[i]=CNG+XS;
/* Then generate 10^9 B64MWC()s */
for(i=0;i<1000000000;i++) x=B64MWC();
printf("Does x=13596816608992115578 ?\n x=%llu\n",x);
/* followed by 10^9 KISSes: */
for(i=0;i<1000000000;i++) x=KISS;
printf("Does x=5033346742750153761 ?\n x=%llu\n",x);
return 0;
}
---------------------------------------------------------
Note that I advocate the KISS (Keep-It-Simple-Stupid)
principle. Even though the MWC RNGs perform very well on
tests of randomness, combining with Congruential (CNG)
and Xorshift (XS) probably provides extra insurance
at the cost of a few simple instructions (and makes
the resulting KISS even harder to crack).
Also note that, for languages without unsigned integers
such as Fortran, the C expression -(t<x) means: subtract 1
if t<x; the comparison can be implemented for signed
integers as t'<x', where the ' means: flip the sign bit.
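A sketch of that sign-bit trick (purely illustrative values;
flipping the sign bit of both operands makes a signed compare
give the unsigned answer):

#include <stdio.h>
#include <stdint.h>

int main(void)
{uint32_t pairs[3][2]={{0xF0000000u,0x10000000u},
                       {1u,2u},
                       {0x7FFFFFFFu,0x80000000u}};
 int i;
 for(i=0;i<3;i++)
   {uint32_t t=pairs[i][0], x=pairs[i][1];
    int32_t tp=(int32_t)(t^0x80000000u);   /* t' */
    int32_t xp=(int32_t)(x^0x80000000u);   /* x' */
    printf("t<x unsigned: %d   t'<x' signed: %d\n", t<x, tp<xp);
   }
 return 0;
}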
Full random seeding of the carry and the Q[] array
calls for more than 17 megabytes of random bits.
For ordinary randomness testing, filling, as above,
the Q[] array with Congruential+Xorshift may serve, but
for stronger crypto uses, many more than the random 64-
or 128-bits that CNG and XS require are needed.
Random seeding of the carry and the Q[] array amounts to
producing, either 32- or 64-bits at a time, the reverse-
order binary expansion of j/m for some 0<=j<=m. Indeed,
for m=ab^r-1, given the carry and the Q[] array, that j is
j=carry+a*sum(Q[i]*b^i),i=0..r-1.
There are phi(m) such j's with period order(2,m).
Since m has no prime factors less than 21000000, with
probability close to 1 a random seeding of Q[] should
produce a j that is relatively prime to m and thus
produce the full period---unknown but almost certainly
far in excess of 10^(40million). Even if random
seeding of carry and Q[] provides a j with gcd(j,m)>1,
the period is still likely to exceed 10^(40million).
There are two exceptions:
All Q[i]=0, carry=0 yields j=0 and the binary expansion
0/m=.00000000000000000...
All Q[i]=b-1=2^32-1 and carry=a-1=(2^28-2) yields j=m
and the binary expansion
m/m=.11111111111111111...
Almost any choice of m=(2^i-1)*2^(2^27)-1 is likely
to provide an immense order for 2 mod m.
Choices i=16,18,22,28 resulted in m's that have no
factors<10^7. Memory seems to be the key restriction
for the resulting Q[] array. With enough memory,
one might seek i's for which m=(2^i-1)*2^(2^28)-1
has no small factors, or even m=(2^i-1)*2^(2^29)-1,
boosting the unknown and unknowable periods of
MWC RNGs to ranges beyond 10^(40million).
George Marsaglia
[...]
With the GCC compiler, adding an optimization flag
such as -O2 or -O3 brought the time down from 29 seconds
to about 11 seconds on an Intel Core 2 Quad Q6600 at 2.4 GHz.
The best compilation options for me seem to be
with
gcc -O3 -march=x86-64 , the second flag indicating
the processor architecture.
The seeding time is negligible. Both the
B64MWC() and KISS values match what they should be.
In a modified program, I computed 11 billion KISS
values. The time difference with 1 billion is
61 seconds, for 10 billion more KISS values.
That results in almost 164,000,000 KISS values
computed per second.
If I'm not mistaken, with the 64-bit version,
each KISS value amounts to 64 pseudo-random bits ...
2^64 - 1 in hex should be: 0xffff ffff ffff ffff
or 0xFFFFFFFFFFFFFFFF. A mask for the low 32 bits
would be 0x00000000FFFFFFFF, i.e.
(x & 0x00000000FFFFFFFF) would select the low 32 bits.
And (x & 0xFFFFFFFF00000000)>>32 should select
the high 32 bits.
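A small self-contained check of those masks, using one of
the 64-bit test values printed by the program above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{uint64_t x=13596816608992115578ULL;
 uint32_t lo=(uint32_t)(x & 0x00000000FFFFFFFFULL);
 uint32_t hi=(uint32_t)((x & 0xFFFFFFFF00000000ULL)>>32);
 printf("x =%llu\nhi=%08x lo=%08x\n",
        (unsigned long long)x,(unsigned)hi,(unsigned)lo);
 return 0;
}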
====
I don't really understand what's going on in the
B64MWC function:
unsigned long long B64MWC(void)
{unsigned long long t,x; static int j=2097151;
j=(j+1)&2097151;
x=Q[j]; t=(x<<28)+carry;
carry=(x>>36)-(t<x);
return (Q[j]=t-x);
}
Arithmetically, what is happening to the array Q[] ?
David Bernier
------ results of my tests, etc. below ----------------
$ gcc -O3 -march=x86-64 test04a.c
$ time ./a.out
seeding of the array Q[] done ...
10^9 B64MWC()s done ... test of output on next line ...
Does x=13596816608992115578 ?
x=13596816608992115578
10^9 KISS values done ... test of output on next line ...
Does x=5033346742750153761 ?
x=5033346742750153761
real 0m10.632s
user 0m10.585s
sys 0m0.017s
============================================================
$ gcc -O3 -march=x86-64 test05a.c
$ time ./a.out
seeding of the array Q[] done ...
10^9 B64MWC()s done ... test of output on next line ...
Does x=13596816608992115578 ?
x=13596816608992115578
11 billion KISS values done ...
Does x=5033346742750153761 ?
x=4752557329978035295
real 1m11.451s
user 1m11.163s
sys 0m0.028s
--
120039822036992245303534619191166796374
I'm thinking of using each of the 64 bits in one KISS
value as 64 coin tosses. Then, with 16 KISS values, we have
1024 bits, and if we count the number of '1' bits,
it should resemble a binomial distribution with
n = 1024, p = 1/2.
< http://en.wikipedia.org/wiki/Binomial_distribution > .
If y is an unsigned long long, I could do something like
unsigned long long count;
int shortcount;
count = y&1LL;
count += (y&2LL)>>1;
count += (y&4LL)>>2;
count += (y&8LL)>>3;
...
count += (y & 0x8000000000000000ULL)>>63;
shortcount = (int) count;
array[shortcount]++; // array of length 1025 .
Would that be an efficient way of counting the number of
'1' bits in a 64-bit unsigned long long ?
David Bernier
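One common alternative to the 64 shift-and-mask steps is a
precomputed table of bit counts (the approach the follow-up
below points to); a minimal sketch, assuming a 16-bit table:

#include <stdio.h>

static unsigned char tbl[65536];

static void init_tbl(void)
{unsigned i;   /* tbl[i] = number of 1 bits in i */
 for(i=1;i<65536;i++) tbl[i]=tbl[i>>1]+(i&1);
}
static int popcount64(unsigned long long y)
{return tbl[y&0xFFFF]+tbl[(y>>16)&0xFFFF]
       +tbl[(y>>32)&0xFFFF]+tbl[(y>>48)&0xFFFF];
}
int main(void)
{unsigned long long y=13596816608992115578ULL;
 init_tbl();
 printf("bits set in %llu : %d\n",y,popcount64(y));
 return 0;
}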
Precompute_16bit seems to be recommended:
http://gurmeet.net/2008/08/05/fast-bit-counting-routines/
As it is, I don't really understand the &gt; or the &amp; ...
David Bernier
--
120039822036992245303534619191166796374
" is the HTML">
The string "&gt;" is the HTML encoding for the ">" symbol, and "&amp;"
encodes "&". Usually this would be invisible to you as the browser
would interpret and render them correctly, but someone involved in
that web page has messed up: the program was encoded *twice*.
--
Tim
> With the GCC compiler, adding an optimization flag
> such as -O2 or -O3 brought the time down from 29 seconds
> to about 11 seconds on an Intel Core 2 Quad Q6600 at 2.4 GHz.
>
> The best compilation options for me seem to be
> with
> gcc -O3 -march=x86-64 , the second flag indicating
> the processor architecture.
You should try gcc -O3 -march=core2
cf. http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html
Regards.
rng: procedure options (main);
declare Q(0:4194303) unsigned fixed binary (32),
carry unsigned fixed binary (32) static initial (0);
b32MWC: procedure () returns (unsigned fixed binary (32));
declare (t,x) unsigned fixed binary (32);
declare j fixed binary (31) static initial (4194303);
j=iand(j+1, 4194303);
x=Q(j); t=isll(x, 28)+carry;
carry=isrl(x, 4)-(t<x);
Q(j)=t-x;
return (Q(j));
end b32MWC;
KISS: procedure() returns (unsigned fixed binary (32));
declare t unsigned fixed binary (32);
t = b32MWC();
cng = bin(69069)*cng+(13579);
xs = ieor(xs, isll(xs, 13));
xs = ieor(xs, isrl(xs, 17));
xs = ieor(xs, isll(xs, 5));
return (t + cng + xs);
end KISS;
declare (i,x,cng static initial (123456789),
xs static initial (362436069)) unsigned fixed binary (32);
declare (start_time, finish_time) float (18);
start_time = secs();
/* First seed Q with CNG+XS: */
do i=0 to 4194304-1;
xs = ieor(xs, isll(xs, 13));
xs = ieor(xs, isrl(xs, 17));
xs = ieor(xs, isll(xs, 5));
cng = bin(69069)*cng+bin(13579);
Q(i)=cng+xs;
end;
/* Then generate 10^9 b32MWC()s */
do i=1 to 1000000000; x=b32MWC(); end;
if decimal(x) ^= 2769813733 then
put skip list ('b32MWC: x is not the expected value 2769813733', ' x=', x);
/* followed by 10^9 KISSes: */
do i=1 to 1000000000; x=KISS(); end;
if decimal(x)^=3545999299 then
put skip list ('KISS: x is not the expected value 3545999299', ' x=', x);
Finish_time = secs();
put skip edit (Finish_time - Start_time, ' secs') (F(10,2), A);
end rng;
______________
"geo" <gmars...@gmail.com> wrote in message news:603ebe15-a32f-4fbb...@t35g2000yqj.googlegroups.com...
I wrote a paper cryptanalyzing the original KISS,
available at http://eprint.iacr.org/2011/007 .
(I emailed it to your FSU address some months ago,
don't know if you got that or not.) In terms of
cryptanalysis, the MWC is definitely the hard nut
to crack. This new generator might be practically
secure (in the sense that it is impractical to
recover the huge state), but it is definitely
not cryptographically practical (because of the
huge state!) nor cryptographically secure (since
the work to recover that huge state is certainly
much less than enumeration of the state -- that is,
there are attacks significantly better than brute
force).
I greatly admire your corpus of work, but please
don't tout these generators as being useful
for cryptography.
Greg.
--
There have been at least two (maybe four) KISS
generators since the 1993 version(s) that you analyzed,
having significantly longer tables and periods than
that one.
Before you make claims like "certainly much less"
and "might be", wouldn't it be better to have actually done
such an analysis on the two new generators?
> There have been at least two (maybe four) KISS
> generators since the 1993 version(s) that you analyzed,
> having significantly longer tables and periods than
> that one.
>
> Before you make claims like "certainly much less"
> and "might be", wouldn't it be better to have actually done
> such an analysis on the two new generators?
Sorry, but for claiming that KISS is broken it's enough to break one
KISS. The existence of many equally named PRNGs is surely not Greg's
fault. I really like Marsaglia's work, but I'd strongly prefer him to
at least number his kisses.
Here is the 64-bit version using PL/I :--
/* 64-bit version in PL/I using unsigned arithmetic 18/1/2011 */
(nosize):
rng64: procedure options (main);
declare (Q(0:2097151), carry initial (0)) unsigned fixed binary (64)
static;
B64MWC: procedure () returns (unsigned fixed binary (64));
declare (t,x) unsigned fixed binary (64);
declare j fixed binary (31) static initial (2097151);
j=iand(j+1, 2097151);
x=Q(j); t=isll(x, 28)+carry;
carry=isrl(x, 36)-(t<x);
Q(j)=t-x;
return (Q(j));
end B64MWC;
KISS: procedure() returns (unsigned fixed binary (64));
declare t unsigned fixed binary (64);
t = B64MWC();
cng=unsigned(6906969069, 64)*cng+unsigned(13579);
xs = ieor(xs, isll(xs, 13));
xs = ieor(xs, isrl(xs, 17));
xs = ieor(xs, isll(xs, 43));
return (t + cng + xs);
end KISS;
declare (i,x,cng static initial (123456789987654321),
xs static initial (362436069362436069))
unsigned fixed binary (64);
declare (start_time, finish_time) float (18);
start_time = secs();
/* First seed Q with CNG+XS: */
do i=0 to 2097151;
xs = ieor(xs, isll(xs, 13));
xs = ieor(xs, isrl(xs, 17));
xs = ieor(xs, isll(xs, 43));
cng=unsigned(6906969069, 64)*cng+unsigned(13579);
Q(i)=CNG+XS;
end;
/* Then generate 10^9 B64MWC()s */
do i=1 to 1000000000; x=B64MWC(); end;
put skip list (x);
if decimal(x) ^= 13596816608992115578 then
put list ('is not the expected value 13596816608992115578');
/* followed by 10^9 KISSes: */
do i=1 to 1000000000; x=KISS(); end;
put skip list (x);
if decimal(x) ^= 5033346742750153761 then
put list ('is not the expected value 5033346742750153761');
Finish_time = secs();
put skip edit (Finish_time - Start_time, ' secs') (F(10,2), A);
end rng64;
Well that, and as Greg pointed out, stop touting them as suitable for
cryptography. Even if they were secure the large state required is
next to impossible to manage properly, so you'd need to seed the state
from a smaller seed somehow [key expansion of some sort].
Thing is George is not a cryptographer. He should stick to comp.sci
and mathematics.
Tom
The previous versions have much smaller state, actually maybe too
small for crypto (<128 bits for the 32 bit version). I really do not
consider using it for crypto, I just like the idea. What I don't like
is the lack of seeding and the missing partitioning into setup, working
code, and test. As a simple exercise I implemented it as a subclass of
java.util.Random; see http://dl.dropbox.com/u/4971686/110119/Kiss1101.java
> There have been at least two (maybe four) KISS
> generators since the 1993 version(s) that you analyzed,
> having significantly longer tables and periods than
> that one.
>
> Before you make claims like "certainly much less"
> and "might be", wouldn't it be better to have actually done
> such an analysis on the two new generators?
It seems an RNG is considered "cryptographically broken" if the
sequence can be cracked substantially faster than with brute force. In
the generator that was examined (period about 2^120, cracked in about
2^29 operations) this actually leads to a crack. But with the two
RNGs of a few months ago, with periods of N = 10^40000 or so, even if
you can crack them in N^(1/1000) operations, they are still absolutely
safe.
However, is this attack so much better as to be
feasible to carry out with technology we have or
are likely to have? E.g. an attack with complexity
like 10^100 or 10^200 is pretty much worthless.
I'm curious: how then would such a huge lack of
"theoretical" security be a problem for actual
use when it's still not easily breakable? Is it
because that if an attack capable of cutting it
down that far is possible, there's almost certainly
an unknown one that cuts it down to the "feasible"
range?
Which I claim makes them even less
cryptographically useful. A period in excess of
about 2^80 doesn't contribute anything to real
world use. But large states/keys have to be dealt
with securely.
>Before you make claims like "certainly much less"
>and "might be", wouldn't it be better to have actually done
>such an analysis on the two new generators?
OK. Looking just at the "32-bit MWC generator",
which I repost here for convenience:
static unsigned long Q[4194304],carry=0;
unsigned long b32MWC(void)
{unsigned long t,x; static int j=4194303;
j=(j+1)&4194303;
x=Q[j]; t=(x<<28)+carry;
carry=(x>>4)-(t<x);
return (Q[j]=t-x);
}
#define CNG ( cng=69069*cng+13579 )
#define XS ( xs^=(xs<<13), xs^=(xs>>17), xs^=(xs<<5) )
#define KISS ( b32MWC()+CNG+XS )
Let's assume that the contents of Q, cng and xs
are initialized completely randomly and
independently of each other. Note that the initial
value of carry is explicitly set to zero.
We assume that the output stream from KISS
is known to the attacker (standard cryptographic
assumption); call this output stream Z_i, for i = 1 ...
Some observations:
1) at every stage, carry will usually be only 28
bits in length. The exception is when the high
order 28 bits of the input Q value are all zero,
carry will (almost never, but depending on the
input carry value) become 0xFFFFFFFF. This is
rare enough not to worry about in the attack
below.
1a) if you know the input value Q[t], the output
carry value is almost entirely defined by the most
significant 28 bits. 15/16ths of the time, whether
the optional subtraction of 1 happens is determined
by comparing the least significant and most
significant 4 bits of the input Q[t], independent
of the input carry.
2) at every stage t, the output of b32MWC will be
the input value of Q[t mod 2^22] to the
calculation of b32MWC 2^22 steps later. (In case
anyone hadn't noticed, 4194304 == 2^22. Let's call
R=2^22 for simplicity later.)
3) CNG and XS are easily invertible (see my paper; a short
sketch follows this list of observations).
4) b32MWC is also invertible; if you know both the
input Q[j] and the output Q[j] not only can you
derive the output carry, you can also derive the
input carry.
5) given (3) and (4), you can run the generator
backwards if you need to.
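A minimal sketch (editorial, not from Greg's paper) of what
"easily invertible" means for CNG and XS as defined above:
CNG is undone with the modular inverse of 69069 mod 2^32,
found here by Newton iteration; each xorshift is undone by
re-applying it until all 32 bits are reconstructed.

#include <stdio.h>
#include <stdint.h>

static uint32_t undo_shl(uint32_t y,int s)   /* invert x ^= x<<s */
{uint32_t x=y; int i;
 for(i=0;i<32/s+1;i++) x=y^(x<<s);
 return x;
}
static uint32_t undo_shr(uint32_t y,int s)   /* invert x ^= x>>s */
{uint32_t x=y; int i;
 for(i=0;i<32/s+1;i++) x=y^(x>>s);
 return x;
}
int main(void)
{uint32_t cng=123456789u, xs=362436069u, c2, x2, i, inv=69069u;
 for(i=0;i<5;i++) inv*=2u-69069u*inv;   /* inverse of 69069 mod 2^32 */
 /* one step forward */
 c2=69069u*cng+13579u;
 x2=xs; x2^=x2<<13; x2^=x2>>17; x2^=x2<<5;
 /* one step backward */
 c2=inv*(c2-13579u);
 x2=undo_shl(x2,5); x2=undo_shr(x2,17); x2=undo_shl(x2,13);
 printf("CNG restored: %s\n", c2==cng ? "yes":"no");
 printf("XS  restored: %s\n", x2==xs ? "yes":"no");
 return 0;
}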
Now an attack:
1. Start by guessing (enumerating) the 2^64 possible
pairs (xs_0 and cng_0).
2. Run the subgenerators CNG and XS forward R+3 rounds,
and calculate M_{i-1} = Z_i - cng_i - xs_i (for 1 <= i <= R+3).
These are possible candidate values for the Q[i-1 mod R]
values.
3. If we guessed the initial values correctly in the
first step, then M_0 should be the input Q[0], and M_R
should be the output Q[0] after R steps. From these, we
can calculate carry_R, and from carry_R and M_{R+1}
(which we hope equals output Q[1]) we can check
whether, in fact, the computation given input and output
Q[1], and what we just calculated, agree with each other.
If they don't, go back to step 1. If they do (which might
happen randomly), do the same for R+2, and R+3. At this
point, if all checks worked, with pretty overwhelming
probability we must have guessed CNG and XS correctly.
4. Run b32MWC backwards to recover the initial settings
of Q[i].
Now the expected work to do this is about 2^85 times
the work to generate output words, which is a pretty
normal measure. (That's 2^64 / 2, since you expect to
search half the values, times about 2^22 words generated
because of the huge state space.)
Another way to look at it is that it's about 2^63 times
the cost of doing a key setup (independent of how you
actually set up that huge key!).
Either way, it is not cryptographically secure. The same
attack strategy works for the 64-bit version, but of course
the exponent in the complexity roughly doubles, to about 2^126.
Now this is almost exactly the opposite of the way I
did the attack on the original KISS. That's because
increasing the size of one of the sub-generators
just made the other ones the weak point.
Greg.
--
I've just posted an attack on this new KISS with
complexity around 2^86 "basic operations" or 2^63
"key setup operations". Even these numbers are
impractical attacks, today, but are much worse
than anything a cryptographer would consider
acceptable.
Greg.
--
Supposing 2^86 basic operations:
Makes 7.737125245533627e+25 operations.
Assuming a dual-core machine making 2 billion basic
operations per second, that makes
3.868562622766814e+16 seconds.
Supposing you use 1000 machines, that makes it
38,685,626,227,668.135 seconds.
You run your 1000 machines 365 days/year:
(365*24*3600) --> 31,536,000 seconds in a year.
Result: 1,226,713.16 years,
more than a MILLION years until you crack it.
Supposing machines get 100 times as fast as today
in the next 10 years, you would still need more
than 12 thousand years (12,267.13).
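Redoing that arithmetic in a few lines (same assumptions:
2^86 operations, 1000 dual-core machines at 2e9 ops/s each):

#include <stdio.h>
#include <math.h>

int main(void)
{double ops=pow(2.0,86.0);                 /* ~7.74e25 operations */
 double per_sec=2.0e9*1000.0;              /* 1000 machines, 2e9 ops/s */
 double seconds=ops/per_sec;
 double years=seconds/(365.0*24.0*3600.0);
 printf("seconds: %.3e\nyears  : %.0f\n",seconds,years);
 return 0;
}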
That may be true, but it's irrelevant, since "attacks only get better".
Greg probably didn't spend much time on it, the point is that the
existence of such an attack indicates the possibility of much better
attacks. Finding them may take a lot of work, but somebody would do it
if it was important enough. Any new crypto PRNG/cipher/hash/whatever
gets analyzed by many people for many years before it gets accepted.
Finding any attack, even one much more impractical than the above, is
a definite stop sign.
Yes, what Maaartin said. Even I said "impractical" above.
But the point is, we have stream ciphers and PRNGs
with far higher resistance to attack, with state sizes
of a few hundred *bits*.
KISS (any of them) is a fine PRNG for simulations,
stand-alone games, and stuff like that. But I
counsel against using them for anything where security
is important. That's all I'm saying.
Greg.
--
It took about an hour to think of it, and a couple of
hours to write it up, but I had the advantage of
the time I spent on the previous paper, knowing what
the primitives were and how they behaved. But it's
a straightforward application of divide-and-conquer
techniques.
Greg.
--
Do you know what I think:
every public crypto system is broken by definition;
there is a better chance that KISS is not now breakable
than that all the other "crypto secure" RNGs are not breakable.
I don't know if the above is true, but I think it.
Whose definition?
> there is a better chance that KISS is not now breakable
Well, given that Greg already broke it ...
> than that all the other "crypto secure" RNGs are not breakable
Nobody said they're "not breakable"; the currently held view of
unbroken ciphers/CSPRNGs is that they're "unbroken."
> I don't know if the above is true, but I think it.
If ignorance were a language, you'd be considered fluent in it.
Tom
I had an insight the other day that speeds up
this attack by a factor of about 2^22. I don't
know why I didn't think of it earlier. Quoting
from the updated paper:
...However this attack can be significantly
optimized. First we note that most of the
generated values from CNG and XS are not used
in the important validation step; of the $R+3$
pairs of values generated, only the first three
and last three are used. Secondly, we note that
both generators can be jumped forward a known
amount (in our case $R$ steps) with very much less
complexity than simply evaluating the functions
$R$ times. In the case of XS, we can precompute
the update matrix, and compute $xs_R$ by
multiplying $xs_0$ by it. Similarly, CNG can be
jumped forward using a modular ``square and
multiply'' exponentiation calculation.
When enumerating the possible initial values of XS
and CNG, there is no reason to enumerate them as
counters. Since the values all fall on cycles (a
single cycle in the case of CNG) the values can be
enumerated in the sequence produced by the cycle.
For example, we start with $cng_0 = 0$, and
quickly calculate $cng_R$ from that. By iterating
CNG we easily calculate $cng_1 .. cng_3$ and
$cng_{R+1} .. cng_{R+3}$ as required above. After
testing this value (and presumably failing), we
can test the next value by discarding $cng_0$,
using the old $cng_1$ as the new $cng_0$,
similarly rolling over $cng_R$ and so on. The
different cycles of XS (as detailed in Appendix
\ref{app:shr3}) can be handled in basically the
same way. Only once the correct values for $cng_0$
and $xs_0$ have been identified is it necessary to
run the generators $R$ steps to recover the
initial values of the $Q$ array. Parallelization
of this attack is a bit more tricky, but still
easy. With this optimization, about $2^{63}$
operations on average can be expected to recover
the key. Since the $Q$ array is so large, this
corresponds to about $2^{41}$ key setup
operations.
Greg.
--
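A minimal sketch (editorial, not from the paper) of the CNG
jump-ahead just described: composing two maps x -> a*x+c gives
another such map, so the R-step map can be built by
square-and-multiply on the pair (a,c), all mod 2^32; the XS
jump via a precomputed GF(2) matrix is analogous and omitted.
The result is checked against stepping R times:

#include <stdio.h>
#include <stdint.h>

int main(void)
{uint32_t A=69069u, C=13579u;        /* CNG as defined earlier */
 uint32_t Ak=1u, Ck=0u;              /* identity map */
 uint32_t a=A, c=C;                  /* the 1-step map, repeatedly squared */
 uint32_t R=4194304u, k=R, cng=123456789u, check, i;
 while(k)
   {if(k&1u){Ck=a*Ck+c; Ak=a*Ak;}    /* compose current power into result */
    c=a*c+c; a=a*a;                  /* square the map */
    k>>=1;
   }
 check=cng;                          /* brute-force check: step R times */
 for(i=0;i<R;i++) check=A*check+C;
 printf("jumped : %u\n",(unsigned)(Ak*cng+Ck));
 printf("stepped: %u\n",(unsigned)check);
 return 0;
}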
Greetings.
Are you aware that the OP, George Marsaglia, died in February a few
weeks after starting this thread? I just found out last week and was
thus quite surprised when I saw this thread which was started by him.
I missed the start of the thread when it was current and didn't pick
up on the January dates when I first glanced at it today. For a second
I thought that somebody had made a sick practical joke on Wikipedia
(where I read about his death) and that he was alive after all -- but
then I noticed the date of the original post. Sad. It is impressive
how he stayed intellectually active long after his retirement. It
seems appropriate that he began this thread (which google shows to be
his second-to-last post) by referring to his desire to climb
mathematical Mount Everests.
> Are you aware that the OP, George Marsaglia, died in February a few
> weeks after starting this thread?
Indeed. How sad.
> A sudden heart attack while walking on the grounds of Capital City
> Country Club in Tallahassee on [2011-02-15] took the life of Dr.
> George Marsaglia. He was born to John and Mabel Marsaglia in Denver,
> Colorado, in 1924.
I was unaware of his death, and sorry to lose
a great person.
Having said that, cryptology is made stronger
by attacking the algorithms, and I hope people
understand that I was not attacking Marsaglia
himself.
Greg.
--
Of course you weren't attacking him. I can't imagine a better way to
honor him than to take his ideas seriously. There is no reason for you
to stop your efforts to find cryptographical weaknesses in his last
suggested RNG.
> Greg.
> --
> Greetings.
>
> Are you aware that the OP, George Marsaglia, died in February a few
> weeks after starting this thread?[snip]
His contributions to random number generation, including his well-known
test package, will certainly be long remembered by many people.
BTW, I believe it is correct that one good use of PRNGs with huge
periods is to appropriately combine them with other PRNGs (xor etc.)
so that one obtains sequences of huge periods that are much more
difficult to analyze in a cryptological sense than the original PRNGs.
M. K. Shen
Well, I disagree, for the previously stated
reasons. But to reiterate them:
1) to get huge periods you require huge states,
which for cryptographic purposes is an albatross
around your neck,
2) I think I have demonstrated that even with
combined PRNGs, you often still don't get
cryptographic security. So it shouldn't be
claimed or even hinted at, lest people believe
it.
Greg.
--
For some purposes, bypassing standard numbers and dealing with random
character generators seems to work well. It's the counterintuitive
thing here, that all numbers are not created equal, but it should be
noted how much information they can represent: as elements in higher
unique bases, along with their special base attributes, they store more
information than in lower ones. I realize that many can't fathom this,
but it is something you can hang your hat on.