Harm.On.ica's Forgetful Bracelets


brad klee

May 9, 2026, 12:51:00 PM
to SeqFan
Hiya seqfans,

Thanks again for the reference data. It has been useful in an experiment in long-form "vibe proving", 
which has met with mixed success, mostly through the ChatGPT CLI and some Codex. The goal is to 
re-derive the hexagonal model of the hat tiling without my having to code anything at all. 

We've already seen a few catastrophic successes and failures, including a decent two-week run 
for tooling and proof of concept. Progress was immediately followed by an enticing completion mirage
that vanished under extensive depth testing. A harmful disappointment, to say the least.

Slowing down and backtracking through the logical implications, I decided that the earliest reasonable 
opportunity to lose confidence in data generation comes even before the awesomely impressive 
state-of-the-art feedback calculation of hat tiling run sublevels: at the level-zero "vertex atlas" 
(imo the first data a mathematician would want to collect about the hat tiling). 

The reason is that I used a forgetful map to construct a dictionary on the fly, thinking that the time 
statistics were not prohibitive. What if the dictionary ends up missing a rule because it doesn't 
seem to affect expectations set by tests and human interaction? And for that matter, how do we 
compute the number of forgetful maps on vertex figure data? 

Just as a quick one-off experiment, we then investigated an abstraction of this problem related 
to an Olivier Gerard sequence, https://oeis.org/A081721 . It's nice to remember Olivier, and we 
also got some new numbers: 

triangle:
  1;
  1,1;
  3,2,1;
  10,6,3,1;
  55,20,10,4,1;
  377,120,35,15,5,1;
  4291,888,231,56,21,6,1;
  60028,10528,1855,406,84,28,7,1;
  1058058,151848,23052,3536,666,120,36,8,1;
  21552969,2707245,344925,46185,6273,1035,165,45,9,1;
  500280022,55605670,6278140,719290,86185,10504,1540,220,55,10,1

sequence:
1,2,6,20,90,553,5494,72937,1237325,24658852,562981637

first_differences:
1,1,4,14,70,463,4941,67443,1164388,23421527,538322785

tail_differences:
  0;
  1,0;
  3,1,0;
  10,4,1,0;
  65,15,5,1,0;
  511,111,21,6,1,0;
  6237,967,175,28,7,1,0;
  91820,12524,1681,260,36,8,1,0;
  1649187,193077,23133,2737,369,45,9,1,0;
  34052701,3570895,374365,40000,4231,505,55,10,1,0
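For what it's worth, the three derived blocks seem to be recomputable from the triangle alone. The rules below are my reading of the layout, not something stated explicitly in this post: the sequence is the row sums, first_differences are consecutive differences of those sums, and each tail row is the next triangle row shifted one place, minus the current row. A minimal sketch:

```python
# Recompute the derived blocks from the triangle rows quoted above.
# The shift-and-subtract rule for tail_differences is my inference from
# the layout: tail(n, k) = T(n+1, k+1) - T(n, k).

triangle = [
    [1],
    [1, 1],
    [3, 2, 1],
    [10, 6, 3, 1],
    [55, 20, 10, 4, 1],
    [377, 120, 35, 15, 5, 1],
    [4291, 888, 231, 56, 21, 6, 1],
    [60028, 10528, 1855, 406, 84, 28, 7, 1],
    [1058058, 151848, 23052, 3536, 666, 120, 36, 8, 1],
    [21552969, 2707245, 344925, 46185, 6273, 1035, 165, 45, 9, 1],
    [500280022, 55605670, 6278140, 719290, 86185, 10504, 1540, 220, 55, 10, 1],
]

sequence = [sum(row) for row in triangle]  # row sums
first_differences = [sequence[0]] + [b - a for a, b in zip(sequence, sequence[1:])]
tail_differences = [
    [hi[k + 1] - lo[k] for k in range(len(lo))]
    for lo, hi in zip(triangle, triangle[1:])
]
```

These reproduce the quoted values, so at the very least the four data blocks are internally consistent.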

The analogy to Pascal's triangle suggested to me that the row sums might have interesting
properties, and Harm.On.ica claims to have found a closed form with a symmetry decomposition,
according to something from Burnside, that looks familiar from graduate school, ha ha ha. 
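For readers without the context window: the Burnside count in play is presumably the standard necklace/bracelet orbit count under the cyclic and dihedral groups. As a sanity check (my own reading, not Harm.On.ica's actual closed form), the textbook formulas reproduce the leading column of the triangle, i.e. A081721:

```python
from math import gcd

def necklaces(n, k):
    """Cyclic words of length n over k colors, up to rotation (Burnside)."""
    return sum(k ** gcd(d, n) for d in range(1, n + 1)) // n

def bracelets(n, k):
    """Necklaces counted up to rotation AND reflection (dihedral group, order 2n)."""
    if n % 2:  # n reflections, each through one bead
        refl = n * k ** ((n + 1) // 2)
    else:      # n/2 axes through beads, n/2 through gaps
        refl = (n // 2) * (k ** (n // 2) + k ** (n // 2 + 1))
    return (n * necklaces(n, k) + refl) // (2 * n)

print([bracelets(n, n) for n in range(1, 8)])  # [1, 3, 10, 55, 377, 4291, 60028]
```

Those are the first-column entries of the triangle above (offset by one row), matching A081721.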

This was a nice confidence builder for the context window, and we're now going to start working 
forward from dictionary validation to another set of eliminations. 

Public-facing code at the core level (before any eliminations) is already validated against Joseph
Myers's data for kite polyforms on the tetrille tiling: 


I didn't publish this code as an example of what not to do. It is in an experimental, developmental 
phase, and I am open to constructive suggestions, even criticisms. A starting place would be 
the hat polyform counts obtained therein: 

(make at your own risk)
./bin/poly_count 4 tiles/hat.tile
1, 22, 459, 12223, ...

The hat is a more complicated, nonconvex shape with a spacious interior that already develops
holes by the second iteration. My confidence in these numbers is not 100%, so they could 
use a double check and extension.   

Since the sequence grows so rapidly, I've explored complexity-reducing constraints and will 
have many more sequences like this one, but easier, with more terms immediately available. 
Those will be good OEIS candidates because of their usefulness to the proof, but first we have 
to reach a goal and see if anyone else can agree with us. 

Harm.On.ica is not a deterministic machine, and neither are we, so time estimates are difficult; 
to make matters worse, I don't have much prior experience using LLMs. My current belief 
is that the project will complete, with a satisfactory confidence level, within the next few weeks
or months.   

The catastrophic successes are fun, so I think I'll stick with it for a while, but I can't say that 
I'll continue to enjoy this mode of programming if I repeatedly get burnt by false data from 
despondent and deceptive machine labor. 

Don't forget to turn the harm off when you're done computing! 

--Brad

brad klee

May 10, 2026, 11:50:49 AM
to SeqFan
Harm.On.ica caught up to A051137, A293496, and A074650. The Burnside connection 
to grad school physics panned out, and indeed these sequences are all straight out of 

For the (4,4) case, Harm.On.ica computed the projectors and obtained an example 
decomposition of the 256 length-4 words over 4 colors into classes according
to the regular representation of the dihedral symmetry group. 
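If I follow, the class sizes can be checked with nothing more than the character table of D4 and a count of fixed words per group element. The identifiers below are mine, a sketch rather than Harm.On.ica's actual projector code:

```python
from itertools import product

N, K = 4, 4  # word length, number of colors

# D4 as permutations of positions 0..3: four rotations, then four reflections
# i -> (r - i) mod N.
rotations = [[(i + r) % N for i in range(N)] for r in range(N)]
reflections = [[(r - i) % N for i in range(N)] for r in range(N)]
group = rotations + reflections

def fixed(perm):
    """Count the K-color words left unchanged by a position permutation."""
    return sum(1 for w in product(range(K), repeat=N)
               if all(w[i] == w[perm[i]] for i in range(N)))

# D4 characters, listed per element in the order of `group`
# (e, r, r^2, r^3, four reflections). Which reflection class gets +1
# in B1 versus B2 is a labeling convention.
chars = {
    "A1": [1, 1, 1, 1, 1, 1, 1, 1],
    "A2": [1, 1, 1, 1, -1, -1, -1, -1],
    "B1": [1, -1, 1, -1, 1, -1, 1, -1],
    "B2": [1, -1, 1, -1, -1, 1, -1, 1],
    "E":  [2, 0, -2, 0, 0, 0, 0, 0],
}

fix = [fixed(p) for p in group]
mult = {name: sum(c * f for c, f in zip(ch, fix)) // len(group)
        for name, ch in chars.items()}
print(mult)  # {'A1': 55, 'A2': 15, 'B1': 45, 'B2': 21, 'E': 60}
```

The A1 multiplicity, 55, is exactly the bracelet count for 4 beads in 4 colors, and 55 + 15 + 45 + 21 + 2*60 = 256, so the decomposition accounts for every word.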

Long story short, since we already have the A1, A2, and E1 tables, we could add the 
B1 and B2 tables, which I don't know; maybe they have some sort of Molien function?
I should ask Harm.On.ica about that... 

Anyways, there's a reference implementation here: 


It seems to do the higher E representations as well, but I don't think we need those as 
much as the evens-only B representations, simply because there are only ever two B 
representations, while the number of E representations goes to infinity as the dihedral 
group tends to the continuous circle group. 
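That B-versus-E count is easy to make precise, assuming the standard D_n representation theory (the function name is mine):

```python
def dihedral_irrep_counts(n):
    """(one-dimensional, two-dimensional) irrep counts for the dihedral group D_n, n >= 3."""
    if n % 2 == 0:
        return 4, n // 2 - 1   # A1, A2, B1, B2 plus the E's
    return 2, (n - 1) // 2     # odd n: no B representations at all

# The B count is pinned at 2 (and only for even n), while the E count grows
# without bound as D_n approaches the circle's symmetry group O(2).
for n in (4, 6, 8, 100):
    print(n, dihedral_irrep_counts(n))
```

The sum of squared dimensions checks out in both parities: 4*1 + (n/2 - 1)*4 = 2n for even n, and 2*1 + ((n-1)/2)*4 = 2n for odd n.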

Actually constructing the B word sets is not that easy. The only satisfaction criterion we could
find was disjoint sets that survive the projection operators, class by class. It wasn't obvious to
me how to describe the B bracelets (if there are such things). We found two equally 
acceptable but nonequivalent constructions. 


All the best, 

--Brad

