A complete list of repeated strings in enwik8


Thomas Graf

Dec 19, 2018, 11:31:29 AM
to Hutter Prize
As a member of the mostly silent majority of folks tackling the Hutter Prize, I have finally made some progress that might be worth discussing:

I am currently working on the preparatory phase of removing as much redundancy as possible, and since - at least for me - half the fun of coding is developing new algorithms, this has taken me a while...

Yesterday I finally got my first complete list of all repeated strings of length 5 or more characters, together with their positions in enwik8. As my method prefers the longest possible strings, I first got a list of strings of length 100 or more characters, including the maybe-or-maybe-not-so-well-known longest natural-language sequence, starting two bytes before "The unique contribution", which runs to well over 5000 characters.

This first list took about 3 hours on a fairly normal PC using only one core (so far); the complete list took about 17 hours (but I think there is room for improvement in the runtime).

This (not yet confirmed) list consists of almost 6 million sequences, which are repetitions of - interestingly - also about 6 million characters; that gives us about 94% redundancy in enwik8, and this without yet considering strings of length 2 to 4.
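
To make the idea concrete, here is a rough Python sketch of what producing such a list involves. This is not my actual code (mine prefers the longest possible strings first and is more involved); it merely indexes every 5-byte substring and keeps those that occur more than once, and it is quite memory-hungry on the full 100 MB file.

from collections import defaultdict

def repeated_5grams(data):
    # index every 5-byte substring by its starting positions
    positions = defaultdict(list)
    for i in range(len(data) - 4):
        positions[data[i:i + 5]].append(i)
    # keep only the substrings that occur at least twice
    return {s: p for s, p in positions.items() if len(p) > 1}

with open("enwik8", "rb") as f:
    repeats = repeated_5grams(f.read())
print(len(repeats), "distinct repeated 5-byte strings")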


My questions to the community:

Is this new information, or just well-known facts (an "old hat", as we Germans say)? I couldn't find much information about this topic beyond http://mattmahoney.net/dc/textdata.html

What runtime should be expected for such a complete list of repeated strings in enwik8 on a standard PC? Are several hours OK, or way off the mark? Now that I have the list, I can move on to the next phase of compression, so the initial runtime doesn't bother me much as long as it stays significantly below one day (in case I want to repeat it), but I'm curious what others might have accomplished so far.


Thanks for your time,
Thomas

Matthias Gallé

Dec 19, 2018, 11:49:15 AM
to hutter...@googlegroups.com
Thomas,

I ran some code that computes all maximal repeats on enwik8. It is an efficient implementation that uses an extended suffix array. It took 1m6.041s of user time on a single 2.20 GHz core.
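
For anyone curious, here is a toy pure-Python sketch of the suffix-array route (not the optimized implementation I actually ran): sort the suffixes, build the LCP array with Kasai's algorithm, and every adjacent pair of suffixes whose LCP is at least 5 witnesses a repeated string of that length at two positions.

def suffix_array(s):
    # naive construction by sorting all suffixes; fine for a toy string only
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s, sa):
    # Kasai's algorithm: lcp[r] is the length of the longest common prefix
    # of the suffixes starting at sa[r] and sa[r - 1]
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] == 0:
            h = 0
            continue
        j = sa[rank[i] - 1]
        while i + h < n and j + h < n and s[i + h] == s[j + h]:
            h += 1
        lcp[rank[i]] = h
        h = max(h - 1, 0)
    return lcp

s = b"abracadabra abracadabra"
sa = suffix_array(s)
lcp = lcp_array(s, sa)
for r in range(1, len(s)):
    if lcp[r] >= 5:
        print(s[sa[r]:sa[r] + lcp[r]], "at positions", sa[r - 1], "and", sa[r])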

--mg



Thomas Graf

Dec 19, 2018, 5:48:47 PM
to Hutter Prize
Ok, wow, that sounds really impressive!

Now that I know I'm far off the mark, can you at least confirm the numbers I stated? Did you try to use this redundancy to compress enwik8, and if so, did it help?


Thanks,
Thomas

Mirek Kaim

Dec 19, 2018, 11:40:43 PM
to Hutter Prize
What's better: 5000 characters repeated once, or 2 characters repeated 5000 times (for example)? What probability can you assign to such a 5000-character string, and what probability can you assign to such a 2-character string? How does that influence the compression?

After you compress all the longer sequences using shorter ones, what will you end up with? A bunch of low-probability sequences? Millions of them?
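
As a back-of-envelope illustration (assuming a naive LZ77-style coder in which a literal byte costs 8 bits and a back-reference costs roughly log2(window) + log2(max_length) bits; real context-mixing coders behave very differently):

from math import log2

def match_cost(window=10**8, max_len=2**16):
    # rough cost of one back-reference: an offset plus a length
    return log2(window) + log2(max_len)      # about 43 bits

def savings(length, extra_copies):
    literal = 8 * length * extra_copies      # cost of re-spelling each extra copy
    coded = extra_copies * match_cost()      # cost of referencing each extra copy
    return literal - coded

print(savings(5000, 1))    # one extra copy of a 5000-char string: a large win
print(savings(2, 5000))    # 5000 extra copies of a 2-char string: a net loss

Under such a simple scheme the single long match is clearly worth a reference, while the 2-character repeats are not; how a strong statistical model exploits them is a different question.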

James Bowery

Dec 19, 2018, 11:47:32 PM
to Hutter Prize
Have you all looked at the literature for "the smallest grammar problem"?

Here's a relatively recent paper:



Thomas Graf

Dec 23, 2018, 9:51:20 AM
to Hutter Prize
Those are all very good questions, whose answers I intend to find out the hard way, and I'm excited to see what I'll find... What I expect (or maybe just hope for?) is a positive effect on compression, and yes, I might be wrong.

James Bowery

Dec 23, 2018, 10:41:24 AM
to Hutter Prize
I suppose I owe Matthias Gallé an apology, as I didn't notice that he was a co-author of "The Generalized Smallest Grammar Problem" when I used the phrase "you all".  I'd encourage Thomas Graf and Mirek Kaim to read his paper(s), which provide the theoretical basis of his claim.  Perhaps Matthias can link to some source code?

Matthias Gallé

Dec 23, 2018, 5:12:07 PM
to hutter...@googlegroups.com
James,

No need to apologize for mentioning one's paper :D.
Payam has uploaded the code for that paper at https://github.com/payamsiyari/GSGP . I think the repeats folder there contains (a modified version of) the code for computing a number of different classes of repeats that I ran the other day. Thomas, this might be useful for you.

--mg



James Bowery

Dec 23, 2018, 6:30:10 PM
to Hutter Prize
Thanks for the code, Matthias.

I decided to torture it by feeding it the output of this Perl program as input, to see if it would come up with the appropriate lexicon:

for(0..10000){print ( (('00000000') x 1, ('01010101010101') x 2, ('10101010101010') x 3, ('11111111111111') x 4)[rand(10)])}

I invoked with:

$  python2 Post-g1fix-irr.py -a p -t c mygeneratedfile.txt

In addition to a large number of non-terminal redirects such as:

N2664 ->  N37

I was rather surprised to find these terminals:

N13 ->  0 0 0 0 0 0 0
N489 ->  1 1 1 1 1 1 1

The "word": '11111111111111' consists of 2 x N489 reflected in:

N9 ->  N489 N489

The "word": '00000000' is reflected in:

N33 ->  N13 0

I suppose there could be other non-terminals for those "words".  

Is there a better way of invoking the GSGP program than the one I used, or is lexicon induction outside the scope of GSGP?

Matthias Gallé

Dec 27, 2018, 4:41:45 PM
to hutter...@googlegroups.com
James,

I believe those unit rules come up when GSGP performs a generalization. If you see A -> B | C, then A is a new non-terminal that generalizes B and C, because they occur in the same context.
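
A made-up toy illustration (not actual GSGP output): if the input contains both "the cat sat" and "the dog sat", then cat and dog occur in the same context "the _ sat", and a generalization step may introduce

N1 -> the N2 sat
N2 -> cat | dog

which is the kind of step that produces unit rules like your N2664 -> N37.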

Lexicon induction is *the* scope of the smallest grammar problem (the rest is "easy", as in polynomial). Now, the formulation of that problem favours approaches that tend to create words that are repeated often, so unique words (such as "print" in your example) will never be part of a yield. For that to happen, you should probably give it a large number of different Perl files.

hope this helps

--mg

James Bowery

Dec 27, 2018, 5:37:12 PM
to Hutter Prize
The "print" is part of the program that generates the input to GSGP.  Reducing the "for(0..10000)" to "for(..100)", the actual input to GSGP looked more like:


jabowery@jabowery-PC ~/dev/Hutter/GSGP
$ perl gen.pl
1111111111111111111111111111010101010101010101010101010111111111111111000000001010101010101010101010101010101010101010101010101010101011111111111111010101010101011111111111111111111111111111101010101010100101010101010101010101010101101010101010101010101010101011111111111111101010101010101111111111111110101010101010101010101010101111111111111110101010101010000000001111111111111111111111111111010101010101011010101010101001010101010101010101010101011010101010101001010101010101111111111111111111111111111110101010101010111111111111111010101010101001010101010101010101010101011010101010101011111111111111111111111111111111111111111101010101010101010101010101011010101010101011111111111111111111111111110101010101010111111111111111101010101010100101010101010111111111111111000000001111111111111111111111111111111111111111111010101010101011111111111111111111111111111010101010101010101010101010111111111111111010101010101011111111111111010101010101011010101010101011111111111111101010101010101111111111111100000000010101010101011111111111111101010101010101111111111111111010101010101010101010101010101010101010101010101010101010101010101010111111111111111010101010101011111111111111101010101010101010101010101010101010101010000000001010101010101011111111111111000000000101010101010110101010101010111111111111110101010101010111111111111111111111111111110000000001010101010101
jabowery@jabowery-PC ~/dev/Hutter/GSGP

So I hope you can see why I rather expected something out of GSGP like:

N13 -> 0 0 0 0 0 0 0 0
N489 -> 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Nwhatever -> 0 1 0 1 0 1 0 1 0 1 0 1 0 1
Nwhichever -> 1 0 1 0 1 0 1 0 1 0 1 0 1 0

and then a "noise" string consisting of random occurrences of N13, N489, Nwhatever and Nwhichever since that string could not be further compressed as it is selected by the "rand()" function (unless one figures out how to model the pseudonumber generator within the minimal grammar).

Thomas Graf

Jan 6, 2019, 4:23:12 PM
to Hutter Prize
Thanks, everybody, for the references! I had a close look at the source code, and while I couldn't figure out the algorithm behind it, I at least found the title of the work by Puglisi et al. and then found their original paper. It is, however, a hard read for a layman, so references to secondary articles about it (if any exist) are welcome.

So far I can't yet see whether (and if so, why) this algorithm would generate the same (or a similar) list as my O(n*n) approach, but when I'm done with my current ideas I'll try to dig deeper and do some experimental comparisons.

Matthias Gallé

Jan 7, 2019, 9:49:50 AM
to hutter...@googlegroups.com
James,

Thank you for clarifying my misunderstanding with respect to the input to the grammar induction, and apologies for not replying to your e-mail sooner.

I ran sequitur [1] on your sequence and obtained the grammar below [2] (the backslashed symbols are original symbols, to distinguish them from the non-terminals, which are integers).
As you can see, rule 14 corresponds to your N13, 24 to N489, and 9 to Nwhatever, although there is no equivalent of Nwhichever (which gets specialized into rules 11 and 19, depending on whether the following string starts with 0 or 1). Is this more in line with what you were looking for?

--mg

[2] 
0 -> 1 1 2 3 3 4 2 5 6 7 7 3 8 9 8 10 11 12 12 13 13 9 13 14 15 16 11 17 18 10 16 8 18 9 19 10 10 20 21 19 20 20 22 23 18 24 6 25 26 25 27 26 28 19 23 29 14 30 21 27 30 27 4 31 32 33 31 34 34 33 5 \1 31 \n 
1 -> 35 35 \1\1\1\1\1\1\1\1
2 -> 1 35 \1\1\1\1\1\1\1\1\1\1\1\1
3 -> 36 36 \0\1\0\1\0\1\0\1
4 -> 3 36 \0\1\0\1\0\1\0\1\0\1\0\1
5 -> \1 \1 \1\1
6 -> 14 \1 \0\0\0\0\0\0\0\0\1
7 -> 4 4 \0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
8 -> 2 \1 \1\1\1\1\1\1\1\1\1\1\1\1\1
9 -> 4 32 \0\1\0\1\0\1\0\1\0\1\0\1\0\1
10 -> 8 5 \1\1\1\1\1\1\1\1\1\1\1\1\1\1\1
11 -> 37 38 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\0
12 -> 37 9 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
13 -> 15 37 \0\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\0\1\0\1\0\1\0\1\0\1\0\1
14 -> 39 39 \0\0\0\0\0\0\0\0
15 -> 32 8 \0\1\1\1\1\1\1\1\1\1\1\1\1\1\1
16 -> 8 19 \1\1\1\1\1\1\1\1\1\1\1\1\1\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
17 -> 19 4 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
18 -> 11 37 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\0\1\0\1\0\1\0\1\0\1\0\1\0\1
19 -> 37 32 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
20 -> 1 5 \1\1\1\1\1\1\1\1\1\1
21 -> 17 32 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
22 -> 35 5 \1\1\1\1\1\1
23 -> 40 41 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1
24 -> 41 22 \1\1\1\1\1\1\1\1\1\1\1\1\1\1
25 -> 24 24 \1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1
26 -> 24 9 \1\1\1\1\1\1\1\1\1\1\1\1\1\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
27 -> 9 9 \0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1
28 -> 41 35 \1\1\1\1\1\1\1\1\1\1\1\1
29 -> 9 33 \0\1\0\1\0\1\0\1\0\1\0\1\0\1\1\1\1\1\1\1\1\1\1\1\1\1\1
30 -> 29 23 \0\1\0\1\0\1\0\1\0\1\0\1\0\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1
31 -> 14 9 \0\0\0\0\0\0\0\0\0\1\0\1\0\1\0\1\0\1\0\1\0\1
32 -> \0 \1 \0\1
33 -> 28 \1 \1\1\1\1\1\1\1\1\1\1\1\1\1
34 -> 40 22 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\1\1\1\1\1\1\1\1\1\1\1\1
35 -> 5 5 \1\1\1\1
36 -> 32 32 \0\1\0\1
37 -> \1 4 \1\0\1\0\1\0\1\0\1\0\1\0\1
38 -> \0 \0 \0\0
39 -> 38 38 \0\0\0\0
40 -> 19 22 \1\0\1\0\1\0\1\0\1\0\1\0\1\0\1\1\1\1\1\1\1
41 -> 22 5 \1\1\1\1\1\1\1\1

James Bowery

Jan 7, 2019, 10:17:35 AM
to Hutter Prize
No. I've actually written C code to compress enwik8 and enwik9 that uses Nevill-Manning's SEQUITUR as a preprocessor to discover a lexicon for Hecht-Nielsen's Cogent Confabulation, and it fails precisely because the 0 symbol ends up factored into a nonsense lexicon, as happened with your rules 11 and 19.  Essentially, I was looking for something that would extract some syntactic structure as input to something that would extract semantic structure.  I think Hecht-Nielsen has adequately demonstrated that Cogent Confabulation extracts semantics if fed an adequate lexicon.

I'd be happy to share that code with you if you like.

James Bowery

Jan 12, 2019, 7:19:27 PM
to Hutter Prize
Have you gotten the code for Evo-Lexis to run?

https://github.com/jabowery/Evo-Lexis

I can't figure out from the GitHub source how to invoke it with a test pattern such as the one I wrote. The only thing I was able to do successfully was create a directory, which I called "working_path", and invoke:

$ python Target-Generator.py working_path

which produces a seemingly endless stream of output looking like:

...
Batch length 9 20 26.305 64 12 0 1 49 151
Batch length 10 25 28.83 17 21 0 1 131 69
number of targets generated so far: 610
Generating batch...
Batch length 1 16 15.395 18 33 0 1 31 169
Batch length 2 26 48.37 13 36 1 0 193 7
Batch length 3 25 20.69 50 8 1 0 131 69
Batch length 4 27 25.34 6 9 0 1 161 39
Batch length 5 29 23.765 18 7 1 0 153 47
Batch length 6 26 24.855 73 59 0 1 33 167
Batch length 7 25 26.66 61 81 1 0 122 78
Batch length 8 16 29.91 27 67 1 0 197 3
Batch length 9 24 24.8 40 17 1 0 55 145
Batch length 10 20 19.93 47 16 1 0 154 46
number of targets generated so far: 620
Generating batch...
Batch length 1 23 22.525 65 35 0 1 181 19
Batch length 2 27 26.37 4 50 1 0 137 63
Batch length 3 29 35.67 26 59 1 0 86 114
Batch length 4 12 13.05 37 47 0 1 199 1
Batch length 5 30 34.82 27 77 1 0 194 6
Batch length 6 20 20.0 33 36 0 1 128 72
Batch length 7 15 25.55 8 38 0 1 166 34
Batch length 8 16 19.28 84 42 1 0 162 38
Batch length 9 24 18.96 44 74 0 1 168 32
Batch length 10 18 16.265 54 0 0 1 23 177
number of targets generated so far: 630
Generating batch...
Batch length 1 18 25.61 74 59 1 0 113 87
...

Matthias Gallé

Jan 13, 2019, 4:57:05 PM
to hutter...@googlegroups.com
You could contact the creator (Payam Siyari): https://www.scs.gatech.edu/content/payam-siyari . Payam recently defended his PhD, but I would guess that he is still interested in helping others use his code.

--mg

Payam Siyari

Jan 15, 2019, 10:36:17 AM
to Hutter Prize
Hi everyone,
Glad that the code on GitHub is useful here.
One thing I should note is that the Evo-Lexis idea (and code) is not tailored for compression; it is an evolutionary model for modeling hierarchical systems. Details here:
https://arxiv.org/abs/1805.04924

Please let me know if there are issues or questions regarding GSGP.

James Bowery

Jan 15, 2019, 12:00:36 PM
to Hutter Prize
It seems to me that a reasonable heuristic for natural language modeling would be something along the lines of Evo-Lexis's "hourglass" DAG, in which the inputs (sources) and outputs (targets) are identical natural language strings. Practically speaking, this would involve incremental rather than "clean-slate" design. Quoting from the paper:

Evo-Lexis computes an (approximate) minimum-cost adjustment of a given hierarchy when the set of targets changes over time (a process we refer to as “incremental design”). For comparison purposes, Evo-Lexis also computes the (approximate) minimum-cost hierarchy that generates a given set of targets from a set of sources in a static (non-evolving) setting (referred to as “clean-slate design”).
