
October Paper Club: models that prove their own correctness


Quinn Dougherty

Sep 19, 2024, 2:02:07 PM
to guaranteed-safe-ai

How can we trust the correctness of a learned model on a particular input of interest? Model accuracy is typically measured *on average* over a distribution of inputs, giving no guarantee for any fixed input. This paper proposes a theoretically-founded solution to this problem: to train *Self-Proving models* that prove the correctness of their output to a verification algorithm V via an Interactive Proof. Self-Proving models satisfy that, with high probability over a random input, the model generates a correct output *and* successfully proves its correctness to V. The *soundness* property of V guarantees that, for *every* input, no model can convince V of the correctness of an incorrect output. Thus, a Self-Proving model proves correctness of most of its outputs, while *all* incorrect outputs (of any model) are detected by V. We devise a generic method for learning Self-Proving models, and we prove convergence bounds under certain assumptions. The theoretical framework and results are complemented by experiments on an arithmetic capability: computing the greatest common divisor (GCD) of two integers. Our learning method is used to train a Self-Proving transformer that computes the GCD *and* proves the correctness of its answer.
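
For a concrete picture of what the verifier V buys you in the GCD setting, here is a minimal, non-interactive sketch in Python. To be clear, this is my own illustration rather than the exact protocol from the paper: it assumes the model outputs Bézout coefficients (x, y) alongside its answer d as the proof. The check is sound on every input: if d > 0 divides both a and b, and x*a + y*b = d, then d must equal gcd(a, b), so no model can get a wrong answer past it.

def verify_gcd_proof(a: int, b: int, d: int, x: int, y: int) -> bool:
    # Verifier V: accept d as gcd(a, b) only with a valid Bezout certificate (x, y).
    # Soundness: d divides both a and b, and gcd(a, b) divides x*a + y*b == d,
    # so the check can only pass when d == gcd(a, b).
    if d <= 0:
        return False
    if a % d != 0 or b % d != 0:
        return False
    return x * a + y * b == d

def extended_gcd(a: int, b: int):
    # Honest prover for this demo: the extended Euclidean algorithm returns
    # (d, x, y) with x*a + y*b == d. A Self-Proving model would have to
    # generate such a certificate itself.
    if b == 0:
        return a, 1, 0
    d, x, y = extended_gcd(b, a % b)
    return d, y, x - (a // b) * y

a, b = 240, 46
d, x, y = extended_gcd(a, b)
assert verify_gcd_proof(a, b, d, x, y)          # correct output + proof: accepted
assert not verify_gcd_proof(a, b, d + 1, x, y)  # incorrect output: rejected

The point of the GCD experiment, as I understand it, is that the transformer learns to produce both the answer and a proof that a fixed verifier like this accepts, so the soundness guarantee comes from V rather than from the model.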


summary slide deck by Syed

Lewis Hammond

Sep 19, 2024, 4:16:08 PM
to guaranteed-safe-ai, Quinn Dougherty
I have been chatting a bit with one of the authors of this paper recently (Orr) and am planning to meet with him in person next week; he is a grad student at Berkeley and might be prepared to drop into the meeting to chat about the paper if there is sufficient interest. Let me know if it would be helpful for me to ask him!

Kris Carlson

Sep 19, 2024, 5:19:40 PM
to Lewis Hammond, guaranteed-safe-ai, Quinn Dougherty
Absolutely. We had good luck at the Rowland seminar inviting authors to help us understand their work, and we didn't even have Zoom then. But so many of them were in, or often traveling through, Boston/Cambridge. Please do ask him.

Syed Jafri

Oct 17, 2024, 3:50:08 PM
to guaranteed-safe-ai
Thanks all for joining! The discussion was really lively, and it was awesome to have Orr join. I've attached the slide deck from today.
GSAI Reading Group_ Models That Prove Their Own Correctness.odp

Syed Jafri

Oct 17, 2024, 3:53:56 PM
to guaranteed-safe-ai
The formatting didn't seem to work well; here's the Google Slides link.

Ronak Mehta

Feb 5, 2025, 6:29:54 PM
to Syed Jafri, guaranteed-safe-ai
Potentially interesting seminar next week, following up on our discussion about this paper last October. Isaac Levine from FAR asked me to share. It is in person, so this is for anyone in the Berkeley/Bay Area:

FAR.AI is hosting a session of Previews, an event series where small groups of specialists gather to discuss emerging and/or significant projects in their shared field of interest.

In this session, Orr Paradise (with co-author and Turing Award winner Shafi Goldwasser) will present recent work on training transformers that prove their own correctness via interactive proof systems (paper here). The discussion will explore a novel learning framework where models not only generate outputs but also engage in formal verification protocols, with theoretical guarantees against adversarial behavior.

The format will include both a technical deep dive into the learning algorithms and an interactive discussion of immediate applications to AI safety research.

Orr's Preview session will take place February 13 at 1PM at FAR.Labs in Berkeley. The session will last approximately one hour.

If you're interested in participating, please fill out this form.
