The language generation in the limit framework [Kleinberg and Mullainathan, NeurIPS 2024] is as follows. There is a known family F of languages (subsets of {0,1}^*). An adversary selects one of the languages L from F and starts writing down its words in some order. In each step, our task is to output a word that belongs to L but has not yet been written down by the adversary. It turns out that for any countable F there is an algorithm that succeeds in all but finitely many steps.
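To make the game concrete, here is a minimal sketch in Python. The family, the membership test, and the "most specific consistent language" strategy are all my illustration, not the paper's construction: I use the nested family L_i = {binary strings beginning with at least i ones}, where this naive strategy already works; the Kleinberg-Mullainathan algorithm is what makes generation possible for arbitrary countable families.

```python
from itertools import count

# Toy instance (an assumption for illustration, not from the paper):
# L_i = binary strings beginning with at least i ones.  The family is
# nested, L_0 ⊇ L_1 ⊇ ..., so guessing the most specific language
# consistent with the adversary's words is already a valid strategy;
# the general countable case needs the Kleinberg-Mullainathan algorithm.

def in_L(i, w):
    """Membership in L_i: w starts with at least i ones."""
    return len(w) >= i and set(w[:i]) <= {"1"}

def generate(seen, already_generated):
    """Output a fresh word from the most specific L_i consistent with
    everything the adversary has written so far."""
    i = 0
    while seen and all(in_L(i + 1, w) for w in seen):
        i += 1  # raise the index while all seen words still fit
    for n in count():  # enumerate L_i, skipping words already used
        w = "1" * i + format(n, "b")
        if w not in seen and w not in already_generated:
            return w

# Simulation: the adversary secretly picks L = L_2 and enumerates it.
seen, generated = set(), set()
for w in ["11", "110", "111", "1100", "1101"]:
    seen.add(w)
    g = generate(seen, generated)
    generated.add(g)
    assert in_L(2, g) and g not in seen  # a valid, unseen word of L
```

Because the family is nested, once every seen word lies in L_i we know L_i ⊆ L, so anything we emit from L_i is safe; in the general framework this reasoning fails and the algorithm is only guaranteed to be correct in all but finitely many steps.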
I will explain a recent work of Kleinberg and Wei [FOCS 2025], where it is shown how to do this while making sure that the subset of L that we generate is dense in it. More precisely, I will explain a simpler version of their result: one can guarantee that for infinitely many n, among the first n words of L in the lexicographic order, we have generated at least 0.49n. They also show that this is possible for all but finitely many n (with some small positive constant in place of 0.49).
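The density being measured can be sketched as follows. This is my own illustration of the definition, with one labeled assumption: I read "lexicographic order" on the infinite set {0,1}^* as length-then-lexicographic (shortlex) order, since pure lexicographic order does not enumerate all of {0,1}^*.

```python
from itertools import count, product

def shortlex(alphabet="01"):
    """Yield all binary strings in length-then-lexicographic order.
    (Assumption: this is the intended reading of 'lexicographic'.)"""
    for n in count():
        for tup in product(alphabet, repeat=n):
            yield "".join(tup)

def density(generated, member, n):
    """Fraction of the first n words of L (in shortlex order) that the
    generator has produced; `member` is the membership test for L."""
    first_n = []
    for w in shortlex():
        if member(w):
            first_n.append(w)
            if len(first_n) == n:
                break
    return sum(w in generated for w in first_n) / n
```

For example, with L = {strings starting with "11"}, the first three words of L in shortlex order are "11", "110", "111", so a generator that has produced {"110", "111"} has density 2/3 at n = 3. The 0.49 guarantee says such a fraction stays near 1/2 for infinitely many n.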