> When this sequence of 'n' 1's is ended, a_(n+2) / a_1 = 1.5^n
> Whether or not the sequence will continue to increase after the initial
> 1's run out,
> I have no idea.
> I found this out by accident. I cannot prove it, but it is true for n
> < 30 and k=1.
> Regards
> Bill
When k=1, that's also 2^(n+1) - 1, so it's a Mersenne Number.
Its binary representation is all 1s, so for n=30, k=1, it is
1111111111111111111111111111111
in binary.
> seem to be the longest.
Close, but not true. For a given bit length, Mersenne Numbers
have twice the mean stopping time of random bit patterns of
the same length, but they are generally not record holders,
because record holders can have as much as 13 times the
mean. My guess is that the record holders are unbounded;
that is, there may be no limit on how much larger the
record holders can be than the mean.
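The "twice the mean" claim is easy to probe empirically. The sketch
below is my own check, not the post's original tooling: it measures
stopping time in compressed Collatz steps (odd -> (3a+1)/2,
even -> a/2), compares the 64-bit Mersenne Number against a sample
of random odd 64-bit patterns, and the bit length, sample size, and
seed are arbitrary choices of mine.

```python
import random

def stopping_time(a):
    """Compressed Collatz steps (odd -> (3a+1)//2, even -> a//2)
    needed to reach 1."""
    steps = 0
    while a > 1:
        a = (3 * a + 1) // 2 if a % 2 else a // 2
        steps += 1
    return steps

random.seed(42)
n = 64                                # bit length under comparison
mersenne = (1 << n) - 1               # all 1s in binary

# random odd n-bit patterns (MSB and LSB forced to 1)
mean_random = sum(
    stopping_time(random.getrandbits(n) | (1 << (n - 1)) | 1)
    for _ in range(200)
) / 200

print(stopping_time(mersenne), mean_random)
```

The exact ratio depends on the step-counting convention, but the
Mersenne value should land well above the random mean.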
> Typically such seeds will have Sequence Vectors which start out with
> "n" 1's.
Right. It turns out that a contiguous block of binary 1s is a
resonant pattern. Resonance directly affects the sequence
vector. Any resonance will cause a repeating pattern in the
sequence vector. In this case, the repeating pattern is 1.
>
> When this sequence of 'n' 1's is ended, a_(n+2) / a_1 = 1.5^n
In terms of bits, you'll find that when the block of 1s is
exhausted, the remaining bit count is 1.585 times the original
block size. When you multiply a binary number by 3, you
are guaranteed to get at least one more bit, and you might get
two bits if a carry propagates past the MSB. On average,
you will get 1.585 bits, since log(3)/log(2) = 1.585.
But, at the least significant end, the resonating all-1s pattern
creates exactly one 0 each time a 3n+1 operation is executed.
Thus, 3n+1 and n/2 operate in lockstep (which is why
you get 1s in the sequence vector) until the original block is
exhausted. The number therefore gains a net 0.585 bits per
cycle, so if you start with 1000 bits, you'll have 1585 bits
when the pattern terminates.
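Both claims can be sketched in a few lines of Python (big ints make
the 1000-bit case easy). The seed 2**n - 1 and the compressed step
(3a+1)//2 are my conventions, so the counts may differ by one from
the post's a_(n+2) indexing, but the 1.5^n growth and the 1585-bit
endpoint come out as described.

```python
def run_of_ones(n):
    """Start from n binary 1s (2**n - 1) and apply compressed
    Collatz steps while the value stays odd; return the run
    length, the final value, and the starting value."""
    a0 = (1 << n) - 1            # binary: n ones
    a, run = a0, 0
    while a % 2 == 1:            # the resonant lockstep phase
        a = (3 * a + 1) // 2     # one 3n+1 and one n/2 per cycle
        run += 1
    return run, a, a0

# growth ratio for a modest n: close to 1.5**n
run, a, a0 = run_of_ones(30)
print(run, a / a0, 1.5 ** 30)

# the 1000-bit case: ~1585 bits when the all-1s pattern ends
run, a, a0 = run_of_ones(1000)
print(run, a.bit_length())
```

The run length equals the block size because each cycle consumes
exactly one 1 from the low end while the top grows by 1.585 bits
on average.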
> Whether or not the sequence will continue to increase after the initial
> 1's run out,
It won't. The bit pattern that remains has the same
statistical distribution of 1s and 0s as a random pattern:
the run lengths follow a negative binomial (specifically, a
geometric) distribution. Since the mean of a geometric
distribution is the inverse of the success probability (here
1/2), each cycle of a random pattern produces a mean of two 0s
at the LSB of the binary number, which means a mean of two bits
lost per cycle (and, subsequently, a lot of 2s in the
sequence vector). And since the mean growth at the MSB is still
1.585 bits per cycle, we end up with a net loss of 0.415
bits per cycle, which is why convergence begins when the
all-1s pattern is exhausted.
See:
<http://members.aol.com/mensanator666/collatz/bang0010.htm>
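The "mean of two 0s at the LSB" figure is straightforward to check
empirically. This sketch is my own: it draws random odd patterns
(the 256-bit width, sample size, and seed are arbitrary choices)
and averages the number of trailing zeros created by one 3n+1 step.

```python
import random

def trailing_zeros(x):
    """Number of 0 bits at the least significant end of x."""
    return (x & -x).bit_length() - 1

random.seed(0)
zeros = []
for _ in range(10000):
    a = random.getrandbits(256) | 1   # random odd bit pattern
    zeros.append(trailing_zeros(3 * a + 1))

# geometric with p = 1/2 on {1, 2, 3, ...}: mean 2
print(sum(zeros) / len(zeros))
```

Half the samples give exactly one trailing zero, a quarter give
two, and so on, which is the geometric distribution the text
describes.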
> I have no idea.
>
> I found this out by accident. I cannot prove it,
Neither can I, mainly because it isn't exactly true.
See:
<http://members.aol.com/mensanator666/Page.htm>
That shows the actual difference between statistics
and reality. Note, though, that although they rarely hit
the red line exactly, they are generally in the neighborhood.
As the number of bits increases, the prediction (percentage-wise)
gets better (as noted on the very last page of Appendix B
in my paper).
> but it is true for n
> < 30 and k=1.
I know you can find examples in that range that
exceed the number in question. Just make sure you are
not comparing apples to oranges. If the resulting pattern
has 24 1-bits, you want to compare against other binary
patterns that are also 24 bits, and you'll find cases that
easily exceed twice the mean.
>
> Regards
>
> Bill
CORRECTION: n 1's are generated by 2*2^n - 1
Yeah, the original formula gives n+1 1s in the sequence vector, not n.
I wasn't concerned about that and in my example I had 31 1s. What's
important is why we get n 1s in the sequence vector.
Have you tried it when k=2? That's the first member of what I call
the "floating 0" group (all patterns with n-1 binary 1s and exactly
1 binary 0 where the MSB and LSB are always 1). For n=5,
the "floating 0" members are:
10111
11011
11101
If n is large, say 100, you'll see a counter-intuitive result. As the
lone 0 "floats" from the MSB to the LSB, the value of the number
increases, due to an ever-decreasing power of 2 being subtracted
from 2**n - 1. Yet, even though the value increases, the stopping
time decreases (shorter sequence vector). It's counter-intuitive
because we generally think that larger values produce longer
sequences.
But if 100 1-bits give me a sequence vector with 100 1s, then
since the first "floating 0" number has only 98 contiguous 1-bits,
its sequence vector starts out with 98 1s, and "11011...1" has 97
1s in the sv, "111011...1" has 96, etc. So this makes sense: the
pattern of the bits is more important than the fact that
"10111...1" is a smaller value than "11101...1".
And even more interesting, there are several resonant points
where the stopping time rockets back to levels similar to the
first value. It's all endlessly fascinating.