Planning the Second Symposium on Algorithmic Information Theory (AIT) & Machine Learning @ The Alan Turing Institute - Call for Interest


Boumediene Hamzi

unread,
Apr 15, 2025, 10:47:11 AM
to ai...@googlegroups.com
Dear all,

We are currently exploring the organization of the Second Symposium on AIT & ML, tentatively planned for the week of July 28th, 2025, at the Alan Turing Institute.
This follows the success of the first edition (4–5 July 2022, London),
which brought together researchers working on the interface of ML and AIT.
🧠 If you're interested in contributing a talk or participating, please fill in this short form:


The event will likely run over 2–3 days, depending on interest.
Feel free to share with others working at the interface of AIT and ML. Looking forward to shaping another exciting edition!


jabowery

unread,
Jul 29, 2025, 8:48:16 AM
to Algorithmic Information Theory
I've been using statistical MDL to decide which of my models, trained on the same data, to believe. It occurred to me that this should be standard industry practice, so I wrote up a description of a different "leaderboard" paradigm: one leaderboard per dataset, with rankings by a symbolically regressed statistical MDL.

https://gemini.google.com/share/6219530ebb81

A couple of caveats: Gemini Pro 2.5 concluded with a Pareto frontier, which is a bit specious since the whole idea is to get away from Pareto frontiers toward a single metric. Also, the symbolic regression must itself be subjected to a similarly principled reduction to bit quantity; that can be done, but it is a minor consideration.

Nor is this a minor quibble: divorcing an influential industry from its roots with convenient fictions like MSE + L2 regularization has deleterious effects on technological civilization.
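A minimal sketch of such a per-dataset ranking, assuming a simple two-part (BIC-style) statistical-MDL score with Gaussian residual coding; the function names and the (k/2)·log2(n) parameter cost are my illustrative choices, not necessarily the exact criterion proposed above:

```python
import math

def mdl_score_bits(residuals, k):
    """Two-part statistical MDL: bits to code the residuals under a
    Gaussian at the MLE variance, plus (k/2)*log2(n) bits for the k
    model parameters. (Continuous coding, so the data term can be
    negative without a fixed precision offset.)"""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    data_bits = 0.5 * n * math.log2(2 * math.pi * math.e * sigma2)
    param_bits = 0.5 * k * math.log2(n)
    return data_bits + param_bits

# One leaderboard per dataset: rank models trained on the same data
# by total description length; fewer bits = more believable.
res_small = [0.1, -0.2, 0.05, 0.15, -0.1, 0.02, -0.08, 0.12]        # k = 3
res_big = [0.01, -0.02, 0.005, 0.015, -0.01, 0.002, -0.008, 0.012]  # k = 50
leaderboard = sorted([("small", mdl_score_bits(res_small, 3)),
                      ("big", mdl_score_bits(res_big, 50))],
                     key=lambda entry: entry[1])
```

In this toy the bigger model fits better but pays more in parameter bits, so the single metric still prefers the smaller one — the point of replacing a Pareto frontier with one number.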


jabowery

unread,
Aug 31, 2025, 8:45:01 PM
to Algorithmic Information Theory

jabowery

unread,
Sep 1, 2025, 5:52:29 PM
to Algorithmic Information Theory

A communiqué, regarding mdllosstorch, to a colleague of Ray Solomonoff’s who has been working on the philosophy of causality from a POV/ansatz that I consider a potential rival of Solomonoff’s:

I’m spelunking the tunnel reality of Solomonoff’s POV: Turing Machines are the ansatz of natural science. With a minor diversion into a Goedelian refinement of Kolmogorov Complexity*, my purpose has been to follow that tunnel to its logical end:

Discovery of causation, not from experimental controls but from the data itself, through program search for the smallest generative algorithm of a given dataset (i.e., the Algorithmic Information Criterion for causal model selection).

This tunnel reality of mine has its limitations. But any science that lacks experimental controls to discern causation must grapple with the same limitation: all such sciences, even though they refuse to admit it, adopt Solomonoff’s prior. So I feel my spelunking this particular tunnel has merit. Yes, it may be a dead end; reductio ad absurdum has precisely such merit.

Given my aforementioned humility, please indulge my fantasy as an aside to your quest for a philosophy of causality:

Yesterday I released a Python package called mdllosstorch that is my effort to gently guide the machine learning industry’s multi-hundred-billion-dollar juggernaut toward a more Solomonoff-esque definition of “loss function”. It provides a differentiable form of minimum description length, which approximates algorithmic information when applied to state-space neural network models (those used by Anthropic’s Claude, among others). It does so by approximating my Goedelian refinement of Kolmogorov Complexity: any recurrent neural network can approximate a Directed CYCLIC Graph of N-input NOR gates – hence my Goedelian trick.
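mdllosstorch’s actual API is not shown here; as an illustration of what a differentiable description-length loss can look like, the sketch below codes weights and residuals under quantized Gaussian codes (precision eps), yielding a bit count that is smooth in the parameters. This is a plain-Python stand-in for tensor code, and every name and coding choice in it is mine:

```python
import math

def gaussian_code_bits(x, sigma, eps=1e-4):
    """Bits to encode x, quantized to precision eps, under a Gaussian
    code N(0, sigma^2): -log2(eps * p(x)). Smooth in x, so sums of
    these terms can be minimized by gradient descent."""
    return (-math.log2(eps)
            + 0.5 * math.log2(2 * math.pi * sigma ** 2)
            + (x * x) / (2 * sigma ** 2 * math.log(2)))

def mdl_loss(weights, residuals, w_sigma=1.0, r_sigma=0.1):
    """Two-part description length: L(model) + L(data | model)."""
    return (sum(gaussian_code_bits(w, w_sigma) for w in weights)
            + sum(gaussian_code_bits(r, r_sigma) for r in residuals))

# Central-difference check that the loss is smooth in a weight;
# the analytic gradient of the weight term is w0 / (w_sigma^2 * ln 2).
w, r, h = [0.5, -0.3], [0.05, -0.02], 1e-6
grad = (mdl_loss([w[0] + h, w[1]], r) - mdl_loss([w[0] - h, w[1]], r)) / (2 * h)
```

In a real training loop the same sums would be written with tensor ops so autograd supplies the gradient; the point is only that “bits” can be a loss you descend.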

As I’m quite aware, you are on the warpath against diagonalization. To the extent Goedel’s trick, hence my own, may rely on just such a fallacy, I’m not here to expound on the virtues of what I’ve accomplished except within the aforementioned humility.

– Jim

* TLDR: Write a program, in a chosen instruction set, that simulates the directed cyclic graph of NiNOR (or NiNAND) gates that implements the chosen instruction set, including the memory that holds the program. Then use that instruction set to write a program that, when executed by the instruction-set simulation program, outputs the given dataset. The minimum, over such pairs, of the sum of the two programs’ lengths in bits is the NiNOR-Complexity.
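Schematically, the footnote’s definition looks like the toy below. The `run` function and candidate set are stand-ins of my own; the true minimum ranges over all (simulator, program) pairs and is uncomputable, so any real use searches a bounded candidate set:

```python
import math

def ninor_complexity_bits(dataset, candidates, run):
    """Minimum, over (simulator, program) pairs that reproduce the
    dataset, of the combined length in bits of the two programs.
    `simulator` stands for the program simulating the NiNOR-gate
    graph that implements an instruction set; run(simulator, program)
    executes `program` on that instruction set."""
    best = math.inf
    for sim, prog in candidates:
        if run(sim, prog) == dataset:
            best = min(best, 8 * (len(sim) + len(prog)))
    return best

# Toy "instruction sets": b"rep3" repeats its program three times,
# b"id" outputs it verbatim.
def toy_run(sim, prog):
    if sim == b"rep3":
        return prog.decode() * 3
    if sim == b"id":
        return prog.decode()
    return None

pairs = [(b"rep3", b"AB"), (b"id", b"ABABAB")]
bits = ninor_complexity_bits("ABABAB", pairs, toy_run)  # 8*(4+2) = 48
```

The repeating instruction set plus a short program beats the verbatim pair, mirroring how the two-program sum rewards structure in the dataset.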
