The distributed Chaffin method


Newsy McNews

Jan 30, 2020, 12:42:04 PM
to Superpermutators
...has very clearly hit a brick wall. With our current methods I don't see how this isn't going to take years before getting anywhere. But I noticed a while back that the result for 146 wasted characters has 718 permutations, only two permutations short of a potential length-871 superpermutation! Maybe there's some way to optimize the results for w = 119+ to squeeze those 2 extra permutations out. w = 119+ seems highly unoptimized, so, based on absolutely nothing but wishful thinking, it could be possible!
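(For anyone checking the arithmetic: if I have the bookkeeping right, a string covering P permutations of 6 symbols with W wasted characters has length P + W + 5, so 718 permutations at waste 146 gives a string of length 869, while 720 permutations at waste 146 would be a full superpermutation of length 720 + 146 + 5 = 871, one character shorter than the known 872.)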

[attachment: oh no.png]

Seraphina Nix

Jan 30, 2020, 7:37:44 PM
to Superpermutators
My assumption was that progress stopped because initial interest dropped off, people stopped contributing compute, and so progress ground to a halt. If that is true, it would not indicate that w=119 is a huge brick wall, but just that the early progress was made only because 500,000+ CPU hours were contributed in a matter of weeks, which is no longer happening (that's more than 1000 cores contributing at once, which is pretty cool!). I don't know whether that is true, but it's my best guess, and it seems pretty likely.
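(Back-of-the-envelope, with the timescale being my guess: 500,000 CPU hours delivered over roughly three weeks works out to 500,000 / (21 × 24) ≈ 1,000 cores running around the clock.)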

The closeness of the highest result is pretty tantalizing! I haven't seen any results on whether running the Chaffin method for every intermediate waste count is fastest, or whether skipping some intermediate steps is better. I thought about trying different schemes on lower n, but it's not obvious whether n=4 and n=5 will generalize well enough to n=6 to be worth paying much attention to. It wouldn't be hard to run some experiments; I just haven't been working on the problem myself for months either. IIRC there were some improvements in the code that kick in at w=120, but how much they will help is also unknown. At the end of the day, we need a lot of compute to push through the Chaffin method, and that either requires a lot of public interest or a lot of money. Or someone comes up with a better algorithm!

Simon Butcher

Feb 8, 2020, 3:23:52 PM
to Superpermutators


On Friday, January 31, 2020 at 12:37:44 AM UTC, Seraphina Nix wrote:
My assumption was that progress stopped because initial interest dropped off, people stopped contributing compute, and so progress ground to a halt. If that is true, it would not indicate that w=119 is a huge brick wall, but just that the early progress was made only because 500,000+ CPU hours were contributed in a matter of weeks, which is no longer happening (that's more than 1000 cores contributing at once, which is pretty cool!). I don't know whether that is true, but it's my best guess, and it seems pretty likely.

The website suggests the total number of tasks for n=6 is 16,123,620. Whether that is for the whole of n=6 or just waste=118, the 118 tasks have had some serious computational power behind them for the past 9 months (the equivalent of many thousands of $/£/€) and have eclipsed all the earlier iterations. We don't even know if the end is in sight for 118 either :(

waste   perms goal   # of tasks   CPU hours   teranodes   result
 115        618          29,318       1,007         497
 116        622         381,284      68,048       5,660
 116        621         229,639      34,071       3,913
 117        625       2,199,254     382,206      45,349

Tim Buchheim

Feb 11, 2020, 1:32:14 PM
to Superpermutators
It looks like roughly an order of magnitude increase in computation every time w increases. I'm guessing we'll finish 118 at around 450,000 teranodes. (We're at 321,000 right now.) I still have many of my CPUs running on team HMC CS but I'll have to stop a lot of them in May when our summer research projects ramp up.
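Here's that extrapolation as a quick back-of-the-envelope Python sketch (not project code; it just takes the teranode totals from Simon's table, with the two waste=116 lines added together, and does a naive geometric extrapolation):

    import math

    # Teranode totals per waste level, copied from Simon's table above
    # (the two waste=116 runs are combined into one number).
    teranodes = {
        115: 497,
        116: 5_660 + 3_913,
        117: 45_349,
    }

    # Growth ratio from one waste level to the next, and its geometric mean.
    ratios = [teranodes[w + 1] / teranodes[w] for w in (115, 116)]
    growth = math.prod(ratios) ** (1 / len(ratios))

    # Naive geometric extrapolation to waste=118 and waste=119.
    est_118 = teranodes[117] * growth
    est_119 = est_118 * growth

    print("per-waste growth ratios:", [round(r, 1) for r in ratios])
    print(f"geometric-mean growth: {growth:.1f}x per waste level")
    print(f"rough waste=118 total: {est_118:,.0f} teranodes")
    print(f"rough waste=119 total: {est_119:,.0f} teranodes")

With those numbers the naive estimate lands around 430,000 teranodes for 118 (so 450,000 is the same ballpark) and somewhere past 4 million teranodes for 119 if nothing about the algorithm changes.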

Simon Butcher

Feb 11, 2020, 2:03:55 PM
to Superpermutators
Hmmm. I ran 1400 Cascade Lake cores for around 24 hours at the weekend to stress test some new machines, and processed around 6M giganodes, probably the cloud equivalent cost of over £1000. Is w=119 going to be processed using the same algorithm? If it follows the same order-of-magnitude increase, it would arguably be too computationally difficult to perform.
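(For scale, treating Tim's figures as the baseline: 6M giganodes is 6,000 teranodes, so that weekend covered a bit over 1% of the ~450,000-teranode estimate for 118. The remaining ~130,000 teranodes would be roughly 20 more runs like it, and a waste=119 level another 10x bigger would be on the order of 700 such runs, i.e. several hundred thousand pounds of cloud time.)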

Jay Pantone

Feb 12, 2020, 2:26:48 PM
to Simon Butcher, Superpermutators
Thanks to Simon, Tim, and everyone who has been pointing resources toward this project! At this point, there are only about 10 teams contributing and the number of cores at work is around 10% of what it was at our peak, which is a big reason that waste=118 has taken so long.

It's certainly possible that the task before us is too monumental to be feasible, but I see two reasons for optimism. 
 (1) In the current waste=118 task, we are proving that as waste increases from 117 to 118, at most 4 new permutations can be added. The pre-computation we've done shows that as waste increases from 118 to 119, at least 5 new permutations can be added (assuming we are not surprised by a more efficient superpermutation with waste=118 than we have previously seen). Since only the case of 6 new permutations will need to be ruled out, it's reasonable to think that the proportional increase in work between waste=118 and waste=119 will not be as high as the increase from waste=117 to waste=118. (Rough numbers below the list.)
 (2) Greg Egan noted a substantial speed-up in the n=5 computation at waste=24 (i.e. 4!). Accordingly, there may be a speed-up in the current n=6 computation when we reach waste = 5! = 120. How substantial will this be? I have no idea.
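To put rough numbers on (1), reading the baseline off Simon's table: if the waste=117 maximum is the 625 permutations shown there, then the waste=118 run is establishing that at most 625 + 4 = 629 permutations fit, the pre-computation already gives at least 629 + 5 = 634 at waste=119, and the waste=119 search only has to decide whether 629 + 6 = 635 is reachable, rather than exploring a wide range of possible targets.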

So, personally, I think it's too early to give up! I'm keeping my 60 cores working on this for now. 

- Jay

