AIT machine learning challenge

Aidan Rocke

Apr 5, 2021, 10:46:23 AM
Hello AIT mailing list, 

I am a French applied mathematician based in Amsterdam who is 
currently organising a machine learning challenge at the intersection 
of algorithmic information theory and number theory: 

Following the recommendation of Marcus Hutter, I decided to share 
this challenge with members of this mailing list. 


Aidan Rocke

Aidan Rocke

Apr 6, 2021, 11:16:50 AM
to Algorithmic Information Theory
After discussing the question of prizes with a close friend, I decided to make 
a few concessions to human nature: 

The best submission by a team or individual combining strong experimental and
theoretical work, presented before the 3rd of November 2021, will be awarded
a 150 cL Château Lafite-Rothschild Magnum from 2012 on the 24th of December 2021.

I will add that since I won't necessarily be able to respond to every single query 
that comes my way, I have added a Frequently Asked Questions section: 

Aidan Rocke

May 10, 2021, 8:30:08 PM
to Algorithmic Information Theory
Hello AIT mailing list, 

An algorithmic information theorist recently brought up the relevance of the Copeland-Erdős constant, but there 
were errors in his analysis. With an information-theoretic definition of normal numbers, which is consistent 
with the usual definition, we may derive compression bounds for any normal number, including the 
Copeland-Erdős constant: 
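As a quick empirical illustration (my own sketch, not part of the analysis above): a general-purpose compressor should not do much better than the log2(10) ≈ 3.32 bits-per-digit entropy bound on a prefix of the Copeland-Erdős constant. The prime cutoff and the choice of zlib here are mine:

```python
import zlib

def primes_below(limit):
    """Sieve of Eratosthenes: all primes strictly below `limit`."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [n for n in range(limit) if sieve[n]]

# Prefix of the Copeland-Erdős constant 0.235711131719..., obtained by
# concatenating the decimal expansions of the primes.
digits = "".join(str(p) for p in primes_below(100_000))

raw_bits = 8 * len(digits)                       # ASCII: 8 bits per digit
compressed_bits = 8 * len(zlib.compress(digits.encode(), 9))
entropy_bound = 3.3219                           # log2(10) bits per digit

print(f"{len(digits)} digits")
print(f"zlib: {compressed_bits / len(digits):.3f} bits/digit "
      f"(entropy bound ~ {entropy_bound:.3f})")
```

On prefixes of this length, zlib stays close to (and above) the entropy bound, which is the qualitative behaviour one expects of a normal number's digit string.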

This analysis does not rule out the usefulness of the machine learning challenge, as it provides
strong experimental evidence for the holographic principle, provided of course that you know 
what you are looking for. The law of conservation of information, which dates back to von 
Neumann, is a useful starting point, and I have attached a derivation which requires only a basic 
understanding of matrix algebra. In any case, a complete theory of algorithmic information 
must start with the law of conservation of information if it is to explain the unreasonable 
effectiveness of Occam's razor in the natural sciences.  
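For readers who want the one-line version (my paraphrase, in the usual quantum-mechanical setting, not a summary of the attached derivation): unitary evolution preserves the spectrum of the density matrix, and hence the von Neumann entropy:

```latex
% U \rho U^{\dagger} has the same eigenvalues \{\lambda_i\} as \rho, so
S(\rho) = -\operatorname{Tr}(\rho \log \rho) = -\sum_i \lambda_i \log \lambda_i,
\qquad
S\!\left(U \rho\, U^{\dagger}\right) = -\sum_i \lambda_i \log \lambda_i = S(\rho)
\quad \text{for unitary } U.
```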

I could say more about this but this week I will prioritise questions concerning normal numbers. 

Take care, 

Aidan Rocke

Aidan Rocke

Jun 18, 2021, 4:55:10 AM
to Algorithmic Information Theory
Hello AIT mailing list, 

A recent exchange with a Swedish number theorist, on Cramér's conjecture, motivated 
me to simplify the presentation of the Monte Carlo Hypothesis for probabilistic number 
theorists and clarify its implications for both Cramér's conjecture and the Riemann Hypothesis. 
I'd like to add a few points that are important but have not been explicitly stated. 

The main technical innovation is to reformulate certain propositions in analytic number theory 
that many take for granted, such as the Prime Number Theorem, in order to carefully assess 
their information-theoretic implications. Philosophically, I am partly motivated by Gregory Chaitin's 
complexity bound on Gödel numbers whose truth value is decidable. This suggests that experimental 
mathematics is eventually necessary for sufficiently complex mathematical propositions. 
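To give one concrete example of such a reformulation (my own back-of-the-envelope sketch based on Cramér's random model, not a claim about the attached material): treat the primality indicator of n as an independent Bernoulli variable with success probability p_n = 1/ln n, as the Prime Number Theorem suggests, and total up the binary entropies:

```latex
H(X_n) = -p_n \log_2 p_n - (1 - p_n) \log_2 (1 - p_n)
       \approx \frac{\log_2 \ln n + \log_2 e}{\ln n},
\qquad p_n = \frac{1}{\ln n},
\\[4pt]
\sum_{n \le N} H(X_n) \;\approx\; \frac{N}{\ln N}\left(\log_2 \ln N + \log_2 e\right)
\;\approx\; \pi(N)\, \log_2 (e \ln N).
```

So under the random model, the prime encoding of the integers up to N carries on the order of π(N) · log2 ln N bits, the kind of quantity one can compare against compressor performance or a learned model's log-loss.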

Another key motivation comes from the physics of the Riemann Hypothesis but this is not 
something that can be easily summarised in a few pages. 


Aidan Rocke

P.S. A colleague who does machine learning at Oxford encouraged me to submit this as a NeurIPS challenge. 
If that goes through, in 2022 I will write a book on experimental mathematics for AI researchers, with a focus 
on both open problems and algorithmic methods. You may be interested in the work of Yang-Hui He: 

Aidan Rocke

Jun 19, 2021, 3:01:41 PM
to Algorithmic Information Theory
P.S. I think I'd add that on the first page, for the partial derivative (eq. 5), I am implicitly using an 
Occam's razor approach to non-linear approximations which is intuitive but currently incomplete. 

The complete theory, which includes its application to special functions in general, will probably 
be worked out in a month or two. 

Aidan Rocke

Jun 25, 2021, 2:17:00 PM
to Algorithmic Information Theory
Hello AIT mailing list, 

I'd like to share a critical observation which occurred to me recently, relating the randomness of the distribution of primes to the computational complexity of integer factorisation. 

So far, the hypothesis that prime encodings are random has been carefully evaluated using a variation of Yao's next-bit test, as well as a primality test over all prime numbers under 10^9. In agreement with the hypothesis, the true positive rate of primality tests approximated using deep learning does not exceed 50%. While I am implicitly 
using the universal approximation property of deep neural networks, a couple of days ago it occurred to me that feedforward neural networks can only simulate 
computable functions with polynomial time complexity relative to their input size. 
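As a toy version of the experiment (entirely my own sketch: the actual challenge uses deep networks and a 10^9 cutoff, whereas here I substitute a sliding-window lookup-table predictor and a 10^5 cutoff so it runs in seconds; the window size is an arbitrary choice):

```python
from collections import Counter

LIMIT = 100_000
WINDOW = 8  # hypothetical choice: predict bit n from the previous 8 bits

# Characteristic sequence of the primes: bits[n] = 1 iff n is prime.
sieve = bytearray([1]) * LIMIT
sieve[0:2] = b"\x00\x00"
for p in range(2, int(LIMIT ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
bits = list(sieve)

# "Train" on the first half: majority vote per observed context window.
half = LIMIT // 2
votes = {}
for n in range(WINDOW, half):
    ctx = tuple(bits[n - WINDOW : n])
    votes.setdefault(ctx, Counter())[bits[n]] += 1

# Evaluate the true positive rate on primes in the second half.
tp = fn = 0
for n in range(half, LIMIT):
    if bits[n] == 1:
        ctx = tuple(bits[n - WINDOW : n])
        pred = votes[ctx].most_common(1)[0][0] if ctx in votes else 0
        if pred == 1:
            tp += 1
        else:
            fn += 1

tpr = tp / (tp + fn)
print(f"true positive rate on primes in [{half}, {LIMIT}): {tpr:.3f}")
```

Because the primes thin out, this baseline mostly predicts "composite", so a low true positive rate here reflects class imbalance as much as randomness; the next-bit formulation in the challenge controls for this, which this sketch does not.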

This leads to a refinement of the original hypothesis, using time-bounded Kolmogorov complexity, that is consistent with the assumption underlying RSA encryption and other cryptographic protocols: integer factorisation can't be solved in polynomial time. More details are included in the attached note. 

Jun 26, 2021, 2:48:51 AM
to Aidan Rocke, Algorithmic Information Theory
Hello Aidan, 

I'm afraid that the theoretical problem of P vs. NP is not solved, so we cannot argue that a neural network cannot compute with polynomial complexity something that we don't know how to compute with polynomial complexity. 
