AGI has been achieved!


John Clark

Dec 20, 2024, 9:08:03 PM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
I believe a century from now historians will say that the age of AI started about three hours ago, because that's when OpenAI revealed their O3 model, and the results are earth-shattering: it beats humans on just about any benchmark you care to name, even the ARC benchmark. And if that isn't Artificial General Intelligence, then what is?



John K Clark    See what's on my new list at  Extropolis

Brent Meeker

Dec 20, 2024, 11:25:22 PM
to everyth...@googlegroups.com
OK, o3, if you're so smart, how do we keep Trump from screwing up our democracy?

Brent

Giulio Prisco

Dec 21, 2024, 12:19:20 AM
to extro...@googlegroups.com, 'Brent Meeker' via Everything List
On Sat, Dec 21, 2024 at 3:08 AM John Clark <johnk...@gmail.com> wrote:
>
> I believe a century from now historians will say that the age of AI started about three hours ago, because that's when OpenAI revealed their O3 model, and the results are earth-shattering: it beats humans on just about any benchmark you care to name, even the ARC benchmark. And if that isn't Artificial General Intelligence, then what is?
>
> OpenAI Unveils o3! AGI ACHIEVED!
>
> o3 - wow
>

I'm watching with a wait-and-see attitude. Perhaps it's too early to claim that AGI has been achieved, but the next few years are going to be interesting, to say the least.

> John K Clark See what's on my new list at Extropolis

Cosmin Visan

Dec 21, 2024, 4:15:30 AM
to Everything List

=)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =))

Cosmin Visan

Dec 21, 2024, 4:15:53 AM
to Everything List
Clown Woooorld!


=)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =)) =))

Cosmin Visan

Dec 21, 2024, 4:17:52 AM
to Everything List
@Brent. Shut up, you woke communist! In case you don't know, you are a straight white male. If that woke feminazi had won the election, you would have been the first to be exterminated. Be glad that Trump won!

PGC

Dec 21, 2024, 5:14:05 AM
to Everything List

Not enough detail for any conclusions. According to the blog post on arcprize.org (see here: https://arcprize.org/blog/oai-o3-pub-breakthrough), 

“OpenAI’s new o3 system—trained on the ARC-AGI-1 Public Training set—has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public leaderboard $10k compute limit. A high-compute (172x) o3 configuration scored 87.5%.” 

This excerpt is central to the discussion. The blog is announcing what it calls a “breakthrough” result, attributing the model’s performance on an evaluation set to the new “o3 system.” The mention of the “$10k compute limit” probably refers to a constraint or budget allocated for training and/or inference on the public leaderboard. Additionally, there is a statement that when the system is scaled up dramatically (172 times more compute resources), it manages to score 87.5%. The difference between the 75.7% result and 87.5% result is thus explained by a large disparity in the computational budget used for training or inference.

More significantly, as the excerpt above makes explicit, the model was trained on the very same data (or a substantial subset of it) against which it was later tested. The text itself says: “trained on the ARC-AGI-1 Public Training set” and then, in the next phrase, reports scores on “the Semi-Private Evaluation set,” which is presumably meant to be the test portion of that same overall dataset (or at least closely related). While it is possible in machine learning to maintain a strictly separate portion of data for testing, the mention of “Semi-Private” invites questions about how distinct or withheld that portion really is. If the “Semi-Private Evaluation set” is derived from the same overall data distribution used in training, or if it contains overlapping examples, then the resulting 75.7% or 87.5% scores might reflect overfitting/memorization more than genuine progress toward robust, generalizable intelligence.
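To make the contamination worry concrete, here is a minimal Python sketch (synthetic data and a small scikit-learn model; purely illustrative, nothing here is o3's actual pipeline). Scoring a model on data it was fit to looks dramatically better than scoring it on a genuinely withheld split:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1,000 synthetic "tasks" with noisy labels stand in for a benchmark.
rng = np.random.RandomState(0)
X = rng.randn(1000, 20)
y = (X[:, 0] + 0.5 * rng.randn(1000) > 0).astype(int)

# A genuinely held-out split, kept away from all fitting and tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# On data the model has already seen, the score looks like a "breakthrough"...
print("seen data:    ", accuracy_score(y_train, model.predict(X_train)))
# ...on truly withheld data, it tells a humbler story.
print("held-out data:", accuracy_score(y_test, model.predict(X_test)))

On a run like this the first number comes out near 100% while the second lands far lower; the gap is exactly the overfitting/memorization effect that the quoted scores cannot rule out.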

The separation of training data from test or evaluation data is critical to ensure that performance metrics capture generalization, rather than the model having “seen” or memorized the answers in training. When a blog post highlights a “breakthrough” but simultaneously acknowledges that the data used to measure said breakthrough was closely related to the training set, skeptics like yours truly naturally question whether this milestone is more about tuning to a known distribution than about a leap in fundamental capabilities. Memorization is not reasoning, as I’ve stated many times before. But this falls on deaf ears here all the time.

Beyond the bare mention of “trained on the ARC-AGI-1 Public Training set,” there is an implied process of repeated tuning or hyperparameter searches. If the developers iterated many times over that dataset—adjusting parameters, architecture decisions, or training strategies to maximize performance on that very same distribution—then the reported results are likely inflated. In other words, repeated attempts to boost the benchmark score can lead to an overly optimistic portrayal of performance. Everybody knows that this becomes “benchmark gaming,” or in milder terms, an accidental overfitting to the validation/test data that was supposed to be isolated.
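That inflation is easy to simulate. In the toy Python run below (again illustrative only, not a claim about OpenAI's actual process), every "tuning run" is a pure coin-flip predictor whose true accuracy is exactly 50%, yet reporting only the best of 200 runs against the same fixed evaluation set yields a number comfortably above chance:

import numpy as np

rng = np.random.RandomState(1)
n_items, n_runs = 400, 200
labels = rng.randint(0, 2, n_items)  # ground truth for 400 eval items

# Each "tuning run" is a random guesser: true skill is exactly 50%.
scores = [(rng.randint(0, 2, n_items) == labels).mean()
          for _ in range(n_runs)]

# Selecting the best run on the same evaluation set inflates the result.
print("best of 200 runs:", max(scores))  # typically around 0.56
print("true skill:       0.50")

Nothing got smarter between run 1 and run 200; only the selection pressure on a fixed evaluation set moved the reported number.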

The blog post mentions two different performance figures: one achieved under the publicly stated $10k compute limit, and another, much higher score (87.5%) when the system was scaled up 172 times in terms of compute expenditure. Consider the costliness of such experiments for so small a performance bump! That’s not a positive sign for optimists regarding this question.
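A back-of-the-envelope calculation, using only the two scores and the 172x multiplier quoted above (the actual dollar costs are not public, so I won't guess at them):

# Score gained per multiple of compute, from the arcprize.org figures.
base_score, high_score = 75.7, 87.5  # reported ARC-AGI-1 scores (%)
compute_multiplier = 172             # reported compute scale-up

gain = high_score - base_score       # 11.8 percentage points
print(f"{gain:.1f} points for {compute_multiplier}x the compute")
print(f"{gain / compute_multiplier:.3f} points per compute multiple")

Roughly 0.07 percentage points per multiple of compute: diminishing returns in their starkest form.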

AI models can indeed be improved—sometimes dramatically—by throwing more compute at them. However, from the perspective of practicality or genuine progress, it may be less impressive if the improvement depends purely on scaling up hardware resources by a large factor, rather than demonstrating a new or more efficient approach to learning and reasoning. Is the expense warranted if the tasks the model is solving do not necessarily require advanced reasoning skills, and a much smaller system (or a human child) can handle them in a simpler way?

If a large-scale AI system with a massive compute budget is merely matching or modestly exceeding the performance that a human child can achieve, it undercuts the notion of a major “breakthrough.” Additionally, children’s ability to adapt to novel tasks and generalize without being artificially “trained” on the same data is a key part of the skepticism: the kind of intelligence the AI system is demonstrating might be narrower or more brittle compared to natural, human-like intelligence.

All of this underscores how these “breakthrough” claims can be misleading if not accompanied by rigorous methodology (e.g., truly held-out data, minimal overlap, reproducible results under consistent compute budgets). While the raw numbers of 75.7% and 87.5% might look impressive at face value, the context provided—including the fact that the ARC-AGI-1 dataset was also used for training—casts doubt on the significance of those scores as an indicator of robust progress in AI or alignment research.

I leave you with a quote from the blog: 

Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don't think o3 is AGI yet. o3 still fails on some very easy tasks, indicating fundamental differences with human intelligence.

Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3, potentially reducing its score to under 30% even at high compute (while a smart human would still be able to score over 95% with no training). This demonstrates the continued possibility of creating challenging, unsaturated benchmarks without having to rely on expert domain knowledge. You'll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible.

And even with that, the memory-vs.-reasoning problem doesn’t vanish. Either you believe your interlocutor is generally intelligent or you don’t. But I’ve repeated this so many times that it’s getting too time-consuming to keep responding in detail. Thanks to John, anyway, for posting past all the narcissism-needs-therapy spam here. It’s getting tedious; I have to agree with Quentin. Fewer and fewer posts with big-picture thinking and a good level of nuance.

Cosmin Visan

Dec 21, 2024, 6:15:04 AM
to Everything List
"indicating fundamental differences with human intelligence."

Dude! Human intelligence is the property of consciousness of being able to bring new ideas into existence out of nothing. You cannot simulate such a thing.
Omg... so many children! When will you people ever grow up?

John Clark

Dec 21, 2024, 7:21:02 AM
to everyth...@googlegroups.com
Is there anything we could do about Cosmin Visan? I'm getting a little tired of scrolling through a seemingly endless stream of nothing but =)) characters.

 John K Clark


Cosmin Visan

Dec 21, 2024, 8:31:39 AM
to Everything List
Yes, because it is the ultimate laugh to believe that a bag of bricks can become conscious. Even the belief in Santa Claus is more rational.

John Clark

Dec 21, 2024, 9:01:40 AM
to everyth...@googlegroups.com
On Sat, Dec 21, 2024 at 5:14 AM PGC <multipl...@gmail.com> wrote:

there is a statement that when the system is scaled up dramatically (172 times more compute resources), it manages to score 87.5%. The difference between the 75.7% result and 87.5% result is thus explained by a large disparity in the computational budget used for training or inference.

Yes, and if O3 had been given even more time it would've scored even higher. To me that indicates that the fundamental problem of AGI has been solved, and now it's just a question of optimizing things to make them more efficient. And if history is any guide, that won't take long: today, much smaller and more compute-efficient models can equal the performance of the huge, compute-hungry, state-of-the-art models of just a few months ago.

It's bizarre to realize that just a month and a half ago the majority of people in the USA thought the major problems facing the country were the trivial issues of illegal immigration and transsexual bathrooms, and that's why Donald Trump will be the most powerful hominid on earth during the most critical period in the entire history of his Homo sapiens species.  
 

the model was explicitly trained on the very same data (or a substantial subset of it) against which it was later tested. The text itself says: “trained on the ARC-AGI-1 Public Training set”


I don't see how the fact that O3 was trained on the ARC-AGI-1 Public Training set could be considered cheating when the ARC people are the ones who released the ARC-AGI-1 Public Training set for the precise purpose, as its name indicates, of training AIs.

Beyond the bare mention of “trained on the ARC-AGI-1 Public Training set,” there is an implied process of repeated tuning or hyperparameter searches.

Yes, because that's what "training an AI" means.

children’s ability to adapt to novel tasks and generalize without being artificially “trained” on the same data is a key part of the skepticism:


Human children need to go to school; so do newly born, childish AIs.

 
a quote from the blog:
"Passing ARC-AGI does not equate to achieving AGI, and, as a matter of fact, I don't think o3 is AGI yet." 

The average human taking the ARC test will receive a score of about 50%; some exceptionally talented humans can get a score of around 80%. About one year ago, back in the stone age when the best AIs only scored about 2% on the ARC test, Francois Chollet, the author of the above quote and the originator of the ARC test, said that if a computer got a score above 75% he would consider it an AGI. But now that O3 can get a score of 87.5% if it thinks for a long time, and 75.7% if it is only allowed a short time to think, Chollet has done what all AI skeptics have done since the 1960s: he has moved the goalposts.
 

Furthermore, early data points suggest that the upcoming ARC-AGI-2 benchmark will still pose a significant challenge to o3,


Yes, I'm certain computers will find it more difficult to get a high score on ARC-AGI-2, but human beings will find this new test even more difficult than computers do. Today's benchmarks are becoming obsolete because computers are rapidly maxing them out; that's why we need ARC-AGI-2, which will be very useful in comparing one AGI to another.

John K Clark    See what's on my new list at  Extropolis
 



Cosmin Visan

Dec 21, 2024, 2:42:22 PM
to Everything List
ARC-AGI-78794312-SUPER-DUPER-MEGA-EXTRA! Only that will achieve consciousness!++

Brent Meeker

Dec 21, 2024, 3:50:22 PM
to everyth...@googlegroups.com
Nobody asked for your opinion. Wanna have a vote as to who on the list should shut up?

Brent



On 12/21/2024 1:17 AM, 'Cosmin Visan' via Everything List wrote:
@Brent. Shut up, you woke communist! In case you don't know, you are a straight white male. If that woke feminazi had won the election, you would have been the first to be exterminated. Be glad that Trump won!


Brent Meeker

Dec 21, 2024, 3:58:47 PM
to everyth...@googlegroups.com



On 12/21/2024 3:15 AM, 'Cosmin Visan' via Everything List wrote:
"indicating fundamental differences with human intelligence."

Dude! Human intelligence is the property of consciousness of being able to bring new ideas into existence out of nothing.
So that's your problem.  You bring your ideas out of nothing. 
 
You cannot simulate such a thing.
How would you know?


Omg... so many children! When will you people ever grow up?
They're already grown up.  Maybe with a little learning, you will.

Brent

Brent Meeker

Dec 21, 2024, 4:04:56 PM
to everyth...@googlegroups.com
Cosmin (or whatever his real name is) says the answer is, "Yes". So let it be so.

Brent

Brent Meeker

Dec 21, 2024, 4:07:36 PM
to everyth...@googlegroups.com



On 12/21/2024 6:00 AM, John Clark wrote:
> It's bizarre to realize that just a month and a half ago the majority
> of people in the USA thought the major problems facing the country
> were the trivial issues of illegal immigration and transsexual
> bathrooms, and that's why Donald Trump will be the most powerful
> hominid on earth during the most critical period in the entire history
> of his Homo sapiens species.

I thought it was Elon.

Brent

Russell Standish

Dec 21, 2024, 6:04:15 PM
to everyth...@googlegroups.com
Time to sharpen your killfiles, guys!

Don't feed the trolls.

--

----------------------------------------------------------------------------
Dr Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders hpc...@hpcoders.com.au
http://www.hpcoders.com.au
----------------------------------------------------------------------------

PGC

Dec 22, 2024, 3:30:30 AM
to Everything List
Considering your reply, I would like to clarify a few points from my perspective. First, I’m not a skeptic who wants AI to fail; on the contrary, I see value in developing more rigorous benchmarks that push AI beyond narrow optimization. While it’s true that scaling a model’s compute often improves performance—e.g., O3 going from 75.7% to 87.5%—that alone doesn’t prove that we’ve achieved “fundamental AGI.” In machine learning history, many benchmarks have been surpassed by throwing more resources at them, yet models often fail when faced with novel tasks.

Regarding the ARC-AGI-1 Public Training set, it’s obviously not entirely “cheating” to use it, and the effort is impressive, but when a system is trained on data very similar to the test set, there’s a real risk of overfitting rather than demonstrating genuine adaptability. Real-world intelligence typically shows up when an agent can handle new, unseen challenges without relying on repeated exposure to similar ones. Human children, for example, often solve a large portion of the ARC puzzles without specialized training or hyperparameter tuning. I’ve personally seen kids under age ten handle 80% to upwards of 90% of the daily “play” tasks on the ARC site (besides acing the 6 problems on the landing page) once they grasp the basic rule of finding the rule, which suggests these particular puzzles might not be the best proxy for broad or “general” intelligence. They are quite fun, actually.

As for the claim that François Chollet “moved the goalpost” once AI systems approached the 75% mark, it’s common in AI research for benchmarks to evolve precisely because scoring high on an older test doesn’t necessarily reflect deep, generalizable reasoning. The purpose of creating tougher challenges, such as the upcoming ARC-AGI-2, is not to deny progress but to ensure that models actually show robust capabilities rather than specialized or memorized skills. Humans may also find these new tasks difficult, but if the tests do a better job of measuring multi-domain adaptability, then both AI and human performance can be evaluated in a more meaningful way.

In short, I do share your excitement about recent strides in AI and welcome the idea that we should keep updating our tests as they become outmoded. I just want those tests to demand broader, more convincing reasoning and more novel problem-solving, rather than rewarding repetition of data a model has already seen or memorized, millions spent on compute, and on-the-fly “tuning” by devs. We’re all on the same page in wanting to drive AI forward—and robust, carefully designed benchmarks help us see how we’re approaching the goal of increasingly “general” intelligence for practical purposes.

Cosmin Visan

unread,
Dec 22, 2024, 4:33:27 AM12/22/24
to Everything List
@Brent. Dear woke communist who wants to get a job without merit, based only on his skin color and the person he has sex with (which unfortunately, did you look in the mirror? you are a white male. you will be the first to be exterminated), how do you simulate the nature of reality? It's like simulating a generator and expecting to get free energy.

John Clark

unread,
Dec 22, 2024, 8:28:59 AM12/22/24
to everyth...@googlegroups.com
On Sun, Dec 22, 2024 at 3:30 AM PGC <multipl...@gmail.com> wrote:

While it’s true that scaling a model’s compute often improves performance—e.g., O3 going from 75.7% to 87.5%—that alone doesn’t prove that we’ve achieved “fundamental AGI.”

François Chollet is the man who invented the ARC test and he did everything he could think of to make his test as difficult as possible for a large language model. Back in the olden days of late 2023, when the best AIs only got scores in the single digits on the ARC test, Chollet said that if a machine ever got higher than 75% on his test then it should be considered an AGI. But when a machine did exactly that he immediately moved the goalpost to some vague unspecified spot. 

And it's not just the ARC test, ALL the previous benchmarks that were supposed to determine when AGI has arrived are no longer of any use because they've all been maxed out. We need new more difficult benchmarks, not to compare AIs to humans because to my mind that debate is over, but in order to compare one AGI to another AGI.
 
In machine learning history, many benchmarks have been surpassed by throwing more resources at them, yet models often fail when faced with novel tasks.

Long before O3 came out it was obvious that computers were getting much better at generalized tasks. For example, back in the stone age of 2017, starting with zero knowledge and with just a few hours of self-study, AlphaZero could become good enough to beat the most talented human at Chess and GO and Shogi and ANY two-player zero-sum game.
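
For readers who want the flavor of how self-play works, here is a minimal tabular sketch for tic-tac-toe in Python. Everything in it is invented for illustration; the real AlphaZero used Monte Carlo tree search plus a deep network, not a lookup table of position values. The point is only the shape of the loop: play against yourself, then nudge your value estimates toward the observed outcomes.

    # Toy, tabular sketch of the self-play idea for tic-tac-toe.
    # Start with zero knowledge, play games against yourself, and move a
    # value estimate for every position seen toward the observed outcome.
    import random
    from collections import defaultdict

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    value = defaultdict(float)   # position -> estimated value for 'X'
    counts = defaultdict(int)

    def self_play_game(epsilon=0.2):
        board, player, history = [' '] * 9, 'X', []
        while not winner(board) and ' ' in board:
            moves = [i for i, s in enumerate(board) if s == ' ']
            def score(m):
                nxt = board[:]; nxt[m] = player
                v = value[tuple(nxt)]
                return v if player == 'X' else -v
            # epsilon-greedy: mostly pick the move with the best learned value
            m = random.choice(moves) if random.random() < epsilon else max(moves, key=score)
            board[m] = player
            history.append(tuple(board))
            player = 'O' if player == 'X' else 'X'
        w = winner(board)
        z = 1.0 if w == 'X' else -1.0 if w == 'O' else 0.0
        for pos in history:          # running average toward the game outcome
            counts[pos] += 1
            value[pos] += (z - value[pos]) / counts[pos]

    for _ in range(20000):
        self_play_game()
    print("positions evaluated:", len(counts))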

 
I’ve personally seen kids under age ten handle about 80 to upwards of 90% of the daily “play” tasks (besides acing the 6 problems on the landing page) on the ARC site once they grasp the basic rule of finding the rule,

 
Once they grasp the basic rule of finding the rule, yes. The first and probably the most difficult step very young children face in taking the ARC test is figuring out what question the ARC test is asking; only after they understand the question can children start thinking about an answer. And the more they play the ARC test the better they get at it. Exactly the same thing could be said about O3. 
 
As for the claim that François Chollet “moved the goalpost” once AI systems approached the 75% mark, it’s common in AI research for benchmarks to evolve precisely because scoring high on an older test doesn’t necessarily reflect deep, generalizable reasoning.

Alan Turing originally said that if you were only communicating over a teletype machine and a computer could convince you that you were communicating with another human being then that computer should be considered as intelligent as a human being. Computers blew past that benchmark about two years ago. 

Douglas Hofstadter, the author of my all-time favorite book Gödel, Escher, Bach, said that if a computer could beat a Chess grandmaster then it would be intelligent, but he didn't expect that to happen in his lifetime. However it happened in 1997. Hofstadter now thinks computers are genuinely intelligent, and he's very frightened.

Then people said Chess was not a good benchmark but the game of GO was, because it is astronomically more complicated than Chess; yet a computer beat the best human GO player in 2016.

Then people said that for a computer to be an AGI it would need to be as good as most people at most things. And I think we're already there.

Then people said for a computer to be an AGI it would need to be better than EVERY human being at EVERYTHING, but that's not Artificial General Intelligence, that's Artificial Superintelligence. And we're almost there. 

The pattern is always the same.
A computer will never be able to do X.
And then a computer does X.
Well OK but a computer will never be able to do Y.
And then a computer does Y.
And then it's obvious they will soon run out of letters of the alphabet.  

John K Clark    See what's on my new list at  Extropolis   

Cosmin Visan

unread,
Dec 22, 2024, 8:44:50 AM12/22/24
to Everything List
Exactly what I said: AGI is not enough. We need SUPER-AGI. But that will also not be enough. We will need SUPER-DUPER-AGI. And so on, until the effects of the drugs wear off and we calm down from the hallucinations.

PGC

unread,
Dec 24, 2024, 12:13:40 PM12/24/24
to Everything List
John, the goalpost argument is so trivial to refute, I won't bother beyond: it's called progress. Also, I actually don't give an f if I convince you or not, because that is not my aim. My aim is just banter for informal informative purposes. But without reciprocity in moving the discussion forward, why discuss anything when you appear to be arguing "AGI is done, who needs more scientific progress?" I have to find loopholes in your arguments to believe that a good-faith discussion is taking place at all. Please extend beyond clichés, rhetorical tricks, and oversimplification, and demonstrate some willingness to show good faith in moving the discussion forward.

I'll attempt again by highlighting a notable aspect of why large language models (LLMs) "feel intelligent," one that partially supports your stance from my pov: their ability to dynamically and contextually activate specific functional pathways within their architecture. Unlike the auto-complete features of a search engine that rely on simple, static predictions of commonly used phrases, LLMs engage in a limited form of functional or programmatic selection that enables them to construct complex, task-appropriate outputs. That's why I stated: "Not entirely cheating". When faced with a prompt, LLMs do not merely stitch together memorized sequences or statistical predictions in isolation; instead, they orchestrate and combine patterns in ways that do mimic the application of limited subroutines or programs tailored to the task at hand.

This mechanism is most evident when LLMs generate outputs that require contextual coherence or task-specific problem-solving, such as composing multi-step reasoning in mathematical queries (Int Mathematical Olympiad) or writing syntactically correct and semantically relevant code. The architecture of an LLM, with its vast parameter space and layered design, allows it to encode and retrieve latent representations of patterns observed in its training data. When given input, the model activates portions of these representations that statistically align with the context and purpose of the task, simulating what appears to be reasoning or problem-solving. This dynamic activation—akin to selecting the right programs for a specific job—creates a marked difference from simpler systems like search engines, where responses are determined by direct matches or straightforward extrapolations.
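
To make the "dynamic activation" point concrete, here is a toy single attention head in Python/numpy. It is a cartoon of one layer, not any production model, and all of the sizes and names are invented; the point is only that the same fixed weights yield a different mixture of learned patterns for every context they are given.

    # A single attention head: fixed weights, context-dependent activation.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                                   # embedding width (arbitrary)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    def attend(context):                    # context: (n_tokens, d) embeddings
        Q, K, V = context @ Wq, context @ Wk, context @ Wv
        scores = Q @ K.T / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)  # softmax over the context tokens
        return w @ V                        # context-dependent blend of values

    prompt_a = rng.normal(size=(5, d))      # stand-ins for two different prompts
    prompt_b = rng.normal(size=(5, d))
    # Same parameters Wq, Wk, Wv, but the mixture of "patterns" differs:
    print(np.allclose(attend(prompt_a), attend(prompt_b)))   # False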

For instance, an LLM solving a riddle or answering a complex question does so by leveraging patterns that mimic logical steps or dependencies, even though it lacks true understanding or abstraction capabilities. It feels different and "more intelligent" because this functional selection imparts a structured response that aligns with human expectations of reasoning. The selection process, while limited, allows the model to approximate behaviors that we associate with intelligence, such as adaptability and contextual awareness, within the constraints of its training.

I acknowledge that your perception of intelligence in these systems is therefore not entirely misplaced! You are not wrong. I've stated that this is more than just memory retrieval. The functional selection aspect of LLMs is genuinely different from earlier systems, reflecting a step toward more sophisticated interaction. However, I will also underscore, from my limited knowledge, that this is far from genuine intelligence or reasoning. LLMs are bound by their probabilistic nature and lack the ability to generalize beyond their training data, persistently learn, or generate higher-order abstractions. What they achieve is impressive within the confines of brute-force statistical modeling plus this novel layer of nuance and technique, but it remains constrained compared to the more adaptive, goal-directed capacities that we'd expect of AGI. 

Nonetheless, the functional selection feature of LLMs is a key reason why they can convincingly emulate aspects of human intelligence and feel qualitatively different from simpler predictive systems. Now stop portraying me as anti AI/AGI or whatever because I don't care and won't let you mess around with my zen contemplating this. Instead, move the discussion forward beyond the transparent advocacy agendas. Of course, you want AGI to have all the answers and make us all immortal; but that isn't reasoning. It's theology, and not the refined kind. 

Enjoy your Holidays guys!



John Clark

unread,
Dec 24, 2024, 2:13:06 PM12/24/24
to everyth...@googlegroups.com
On Tue, Dec 24, 2024 at 12:13 PM PGC <multipl...@gmail.com> wrote:
 
simulating what appears to be reasoning or problem-solving.

Simulating?  If Einstein was only doing "simulated" thinking when he came up with General Relativity and not "real" thinking then how would things be any different? It seems to me that a problem has either been solved or it has not been, and simulated versus real has nothing to do with it.  

For instance, an LLM solving a riddle or answering a complex question does so by leveraging patterns that mimic logical steps or dependencies, even though it lacks true understanding

It's not clear to me how you know "it lacks true understanding".  If an AI can answer a question that you cannot, how can you have "true understanding" of it but the AI does not?  Did Einstein have true understanding of general relativity or only a simulated understanding?  

 It feels different and "more intelligent" because this functional selection imparts a structured response that aligns with human expectations of reasoning. 

If the vast majority of human beings think that X is more intelligent than Y then the simplest and most obvious explanation for that is that X is more intelligent than Y. And I don't understand how you could say that an AI is not intelligent but merely behaving intelligently because you don't like the way its mind operates; the trouble is you don't have a deep understanding of how your own mind operates, and even the people at OpenAI only have a hazy understanding of how O3 works even though they built it.  
 
this is far from genuine intelligence or reasoning. LLMs are bound by their probabilistic nature and lack the ability to generalize beyond their training data,

200 million protein structures were certainly not in any AI's training data, nor were superhumanly brilliant games of Chess and GO. The same thing could be said about the Epoch AI FrontierMath problems and the ARC benchmark.

> or generate higher-order abstractions.

I do not believe it's possible to solve ANY of the problems on the Epoch AI FrontierMath test, problems that even world-class mathematicians find to be very difficult, without the ability to generate higher-order abstractions. But if I'm wrong about that, then I would be astonished to learn that higher-order abstractions are simply not important, because the fact remains that, regardless of the method, the problems were solved.  
 John K Clark    See what's on my new list at  Extropolis  

Brent Meeker

unread,
Dec 24, 2024, 5:58:16 PM12/24/24
to everyth...@googlegroups.com
It seems that the question revolves around whether these very smart LLMs solve problems by developing a theory of the problem, something they could explain as a generic method, or whether the solution comes with no such generalized explanation, what we would call intuition in a human being.

Brent

PGC

unread,
Dec 25, 2024, 12:06:06 AM12/25/24
to Everything List
You see what I'm getting at. It’s true that from a purely outcome-based perspective—either a problem is solved, or it isn’t—it can seem irrelevant whether an AI is “really” reasoning or simply following patterns. This I'll gladly concede to John's argument. If Einstein’s “real reasoning” and an AI’s “simulated reasoning” both yield a correct solution, the difference might appear purely philosophical. However, when we look at how that solution is derived—whether there’s a coherent, reusable framework or just a brute-force pattern assembly within a large but narrowly defined training distribution—we begin to see distinctions that matter. Achievements like superhuman board-game play and accurate protein folding often leverage enormous coverage in training or simulation data. This can conceal limited adaptability when confronted with radically unfamiliar inputs, and even if a large model appears to have used “higher-order abstraction,” it may not have produced a stable or generalizable theory—only an ad hoc solution that might fail under new conditions.

Ultimately, the decision to rely on a powerful AI system or on traditional expertise depends on context and stakes. In math competitions or research tasks—situations where large training sets and repeated fine-tuning capture nearly all plausible conditions—state-of-the-art models shine, even if they’re performing an advanced form of pattern matching. But in high-risk domains like brain surgery, we risk encountering unforeseen circumstances well beyond any training distribution. The inability of current AIs to robustly handle such scenarios—unless we invest massive resources in real-time fine-tuning and developer supervision—makes these systems less practical and can slow down urgent decision-making. In those cases, trusting an AI’s specialized “intelligence” is inefficient compared to a competent human team prepared for the unexpected. The issue is not so much whether the AI’s process resembles “real understanding,” but whether the method is general enough to accommodate time-sensitive, unpredictable demands beyond its prior training.

And it's not just fancy brain surgery. The domains in which this applies, and the associated number of problems, appear quite vast. Other domains, off the top of my head, in which this robust kind of generalization is critical are self-driving vehicles in unmapped terrains or extreme weather: unexpected obstacles and conditions require reliable out-of-distribution reasoning (and I don't care if it's better than humans, every life counts, so no need to go there). Or disaster response robotics: robots exploring collapsed buildings or handling hazardous materials need the ability to handle chaotic, unfamiliar environments. Another one would be financial trading during market crises: extreme market shifts can quickly invalidate learned patterns, and "intuitive" AI decisions might fail without robust abstraction and lose billions. How about nuclear power plant operations? Rare emergency scenarios, if not seen in training, could lead to catastrophic outcomes without genuine adaptability. And of course the unfortunately perennial military or defense systems domains and problems: systems encountering enemy strategies or technologies not in training data will need human-level adaptability and creative problem-solving, especially as opponents increasingly employ the same systems. I'm sure the distinction is relevant in more domains still. But it's the Holidays, so please excuse the rough nature of this reply.

Brent Meeker

unread,
Dec 25, 2024, 12:46:51 AM12/25/24
to everyth...@googlegroups.com



On 12/24/2024 9:06 PM, PGC wrote:
> You see what I'm getting at. It’s true that from a purely
> outcome-based perspective—either a problem is solved, or it isn’t—it
> can seem irrelevant whether an AI is “really” reasoning or simply
> following patterns. This I'll gladly concede to John's argument. If
> Einstein’s “real reasoning” and an AI’s “simulated reasoning” both
> yield a correct solution, the difference might appear purely
> philosophical. However, when we look at how that solution is
> derived—whether there’s a coherent, reusable framework or just a
> brute-force pattern assembly within a large but narrowly defined
> training distribution—we begin to see distinctions that matter.
I think that LLM thinking is not like reasoning.  It reminds me of my
grandfather who was a cattleman in Texas.  I used to go to auction with
him where he would buy calves to raise and where he would auction off
ones he had raised.  He could do calculations of what to pay, gain and
loss, expected prices, cost of feed all in his head almost instantly. 
But he couldn't explain how he did it.  He could do it with pencil and paper
and explain that; but not how he did it in his head.  So although the
arithmetic would be of the same kind he couldn't figure insurance rates
and payouts, or medical expenses, or home construction costs in his
head.  The difference with LLMs is they have absorbed so many examples
on every subject, as my grandfather had of auctioning cattle, that the
LLMs don't have reasoning, they have finely developed intuition, and
they have it about every subject.  Humans don't have the capacity to
develop that level of intuition about more than one or two subjects;
beyond that they have to rely on slow, formal reasoning.

Brent

John Clark

unread,
Dec 25, 2024, 7:57:46 AM12/25/24
to everyth...@googlegroups.com
On Wed, Dec 25, 2024 at 12:06 AM PGC <multipl...@gmail.com> wrote:

You see what I'm getting at. It’s true that from a purely outcome-based perspective—either a problem is solved, or it isn’t—it can seem irrelevant whether an AI is “really” reasoning or simply following patterns. This I'll gladly concede to John's argument. If Einstein’s “real reasoning” and an AI’s “simulated reasoning” both yield a correct solution, the difference might appear purely philosophical.

Thank you.  
 
However, when we look at how that solution is derived—whether there’s a coherent, reusable framework or just a brute-force pattern assembly within a large but narrowly defined training distribution—we begin to see distinctions that matter. Achievements like superhuman board-game play and accurate protein folding often leverage enormous coverage in training or simulation data.

In other words an AI, like a human being, needs to be educated. No human being can be introduced to Chess for the very first time and immediately play it at a grandmaster level, it takes a human years of study and it takes an AI hours of study.    
This can conceal limited adaptability when confronted with radically unfamiliar inputs, and even if a large model appears to have used “higher-order abstraction,” it may not have produced a stable or generalizable theory—only an ad hoc solution  that might fail under new conditions.
 
The same thing is true for human scientists. When in 1900 Max Planck first introduced the concept that energy was quantized his idea fit the data for the blackbody radiation curve but it caused all sorts of other problems; for example the electron should constantly lose energy by radiating electromagnetic waves and crash into the nucleus, but that doesn't happen.  Even Planck thought his idea was just a mathematical trick that allowed him to predict what the blackbody radiation curve would look like at a given temperature. Quantum Mechanics wasn't fully fleshed out until 1927 and even today some think it's not entirely complete. Usually before you find a theory that can give you a perfect answer all of the time, you find a theory that can give you an approximate answer most of the time.        

in high-risk domains like brain surgery, we risk encountering unforeseen circumstances well beyond any training distribution.

Regardless of whether your surgeon is human or robotic, if, during your brain surgery, he or it encounters something never seen before in his or its education or previous practice, then you're probably toast.  

 
  AI’s specialized “intelligence” is inefficient compared to a competent human team

Compared with humans current AIs are certainly inefficient if you're talking about energy consumption, but not if you're talking about time consumption. No human being could start with zero knowledge of Chess and become a grandmaster of the game after two hours of self study, especially if during those two hours he couldn't talk to other Chess players and had no access to Chess books; but an AI can.  

 
Other domains off the top in which this robust kind of generalization is critical are Self-Driving Vehicles in unmapped terrains or extreme weather:

Self-Driving cars are already safer than the average human driver in unmapped terrains or extreme weather, but for legal and societal reasons they will not become ubiquitous until they are MUCH safer than even the very safest human driver.  

 
 Another one would be financial trading during market crises: Extreme market shifts can quickly invalidate learned patterns, and “intuitive” AI decisions might fail without robust abstraction and lose billions.

In those extreme circumstances where seconds and even milliseconds become important I would feel far more comfortable with an AI managing my money than a human. 

 
How about nuclear power plant operations? Rare emergency scenarios, if not seen in training, could lead to catastrophic outcomes

And that's exactly what caused the Chernobyl disaster and the Three Mile Island meltdown, and in both cases the operators of the reactors were human beings. The Fukushima nuclear accident was not caused by a reactor operator error but by the human reactor designers who thought it was a good idea to build a nuclear plant very near the ocean and put the emergency diesel generators in the basement even though it was directly above a major earthquake fault line. 

 John K Clark    See what's on my new list at  Extropolis  


PGC

unread,
Dec 26, 2024, 11:02:03 PM12/26/24
to Everything List

I’d note first that your analogy with chess and engine moves ignores an asymmetry: a grandmaster can often sense the “non-human” quality of an engine’s play, whereas engines are not equipped to detect distinctly human patterns unless explicitly trained for that. Even then, the very subtleties grandmasters notice—intuitive plausibility, certain psychological hallmarks—are not easily reduced to a static dataset of “human moves.” That’s why a GM plus an engine can typically spot purely artificial play in a way the engine itself cannot reciprocate. Particularly with longer sets of moves, but sometimes a single move suffices.

Chess self-play is btw trivial to scale because it operates in a closed domain with clear rules. You can’t replicate that level of synthetic data generation in, say, urban traffic, nuclear power plants, surgery, or modern warfare.  

On self-driving cars, relying on “training” alone to handle out-of-distribution anomalies can lead to catastrophic, repetitive failures in the face of curbside anomalies of various kinds (unusual geometries, rock formations, obstacles not in the training data, etc.). It is effin disturbing to see a vehicle costing x thousand dollars repeatedly slamming into an obstacle, changing direction, only to do so again and again and again… 

Humans, even dumb ones like yours truly, by contrast, tend to halt or re-evaluate instantly after one near miss. Equating “education” for humans and “training data” for AI ignores how real-world safety demands robust adaptation to new, possibly singular events. 

Now you will again perform one of the following rhetorical strategies to obfuscate: 

1. Equate AI with human/Löbian learning through whataboutism or symmetry arguments

2. Undermine the uniqueness and gravity of AI limitations by highlighting human failures/mistakes, thereby diluting AI risks

3. Cherry pick examples of dramatic appearing AI advantages

4. Employ historical allusions to show parallels between AI and scientific discoveries. E.g. the references to Planck’s quantum hypothesis and the slow, iterative acceptance of quantum mechanics cast doubt on the suggestion that AI’s ad hoc solutions are a unique liability. The argument is that scientists, too, often start with incomplete or approximate theories and refine them over time. While this is valid to a point, it conflates two different processes: humans can revise and expand theories with continuing insight, whereas most current AI systems cannot autonomously re-architect their own approach without extensive retraining or reprogramming. 

As well as others I don't want to bother with.

But this is irrelevant because in the last post, which wasn’t meant for you, I prepared a trap regardless, as I will only engage good-faith arguments that push the discussion in new directions (I will engage more deeply with Brent’s reply for this reason). And, being your usual modest self, you took the bait, which ends my conversation with you on this subject for now, as I will illustrate in the concluding section of this post. Even though I appreciate your input to this list, your posts hyping AI advances at this point are becoming almost all spam/advertising for ideological reasons, as I will illustrate.

The bait was luring you to clarify your stance that “a problem is solved or it isn’t” while simultaneously implying that AI failure is more tolerable than human failure. That is self-contradictory. If correctness is correctness, then the consequences of AI mistakes should weigh at least as heavily as those of human missteps—yet you are clearly content to overlook AI’s more rigid blind spots. 

That’s rigid ideology/personal belief validation masking itself as discussion. It does not respect sincere and open discussions that make this list a resource imho. Employ your usual rhetorical tricks and split hairs as you wish in reply. Your horse in this race is revealed. You are free to believe in perfect AI’s and let them run your monetary affairs, surgery, life etc. I simply don’t care and wish you all the best with that. 

Russell Standish

unread,
Dec 26, 2024, 11:03:21 PM12/26/24
to everyth...@googlegroups.com
On this topic, I've started using new GenAI models in my work as a
software engineer. In the first example, I used ChatGPT as an
assistant to debug a particularly perplexing issue I had with window
placement on Windows. The experience was kind of like pair programming
with a junior programmer, one who'd digested the Win32 API, and could
suggest possible things to try. I'd liken it to having an intelligent
rubber duck
(https://en.wikipedia.org/wiki/Rubber_duck_debugging). Whatever, over
the course of a couple of hours, we'd together solved a problem that
had been bugging me for maybe a couple of years. I emphasise together -
ChatGPT's solutions were often wrong in detail, but explaining what was
wrong with them helped me see how to solve it, and ChatGPT's comments
helped focus thinking along useful pathways. I'd say it has all of the
advantages of pair programming, without the cost (at least if you use
the free tier like I did).

In the second example, I have just enabled CodeRabbit as a code
reviewer on my Github pull requests. The code review comments were
probably typical of a human code reviewer - mostly trivial nits - it
remains to be seen if it really pays off by finding a bug that slipped
in. So far, the code change in that PR was TypeScript; it'll be
interesting to see how it fares with C++. What was quite interesting is
that it summarised the pull request. The text summary was actually
spot on - exactly what the code change was for - even though I
commented very sparsely, as is my wont - but I have to say the
sequence diagram it produced had the timelines in completely the wrong
order. Oh well, can't have everything, I guess.

Github has just enabled free use of Github Copilot in open source
projects, and I have installed a copilot plugin into my editor
(emacs). So let's see if this is actually useful...

Cheers

PGC

unread,
Dec 26, 2024, 11:35:39 PM12/26/24
to Everything List
On Wednesday, December 25, 2024 at 1:46:51 PM UTC+8 Brent Meeker wrote:

I think that LLM thinking is not like reasoning.  It reminds me of my
grandfather who was a cattleman in Texas.  I used to go to auction with
him where he would buy calves to raise and where he would auction off
ones he had raised.  He could do calculations of what to pay, gain and
loss, expected prices, cost of feed all in his head almost instantly. 
But he couldn't explain how he did it.  He could do it with pencil and paper
and explain that; but not how he did it in his head.  So although the
arithmetic would be of the same kind he couldn't figure insurance rates
and payouts, or medical expenses, or home construction costs in his
head.  The difference with LLMs is they have absorbed so many examples
on every subject, as my grandfather had of auctioning cattle, that the
LLMs don't have reasoning, they have finely developed intuition, and
they have it about every subject.  Humans don't have the capacity to
develop that level of intuition about more than one or two subjects;
beyond that they have to rely on slow, formal reasoning.

Nail on the head. That’s an interesting story and got me thinking (too much again). This reminds me of the distinction in Kahneman’s bestseller from 2011 between “type 1” and “type 2” thinking (aka “system 1” and “system 2”). And although there are problems/controversies with his ideas/evidence, as is almost standard with psychological proposals of this kind, we can, if you indulge me, take his ideas less literally, as describing loose cognitive styles of reasoning, as follows: 

LLMs today (end of 2024), with their vast memory-like parameter sets, are closer to system 1 “intuitive reasoning styles” that appear startlingly broad and context-sensitive—even "fine", as you say, somewhat related to your grandfather's style of reasoning in their area(s) of expertise. Yet, in my view, LLMs (not wishing to imply anything about your grandfather) are not performing the slower, more deliberative, rule-based thinking we often associate with system 2.

Take the example of Magnus Carlsen, arguably one of the best performing chess players in recent history, relying increasingly on instinctive, “type 1” moves, after thousands of hours of deliberate “type 2” calculation (when he was young and ascending the throne, he was more known for his calculation skills). This illustrates how extensive practice can shift certain skills from a laborious, step-by-step process toward fast, experience-based pattern recognition. I’ve observed many times how he looks at the clock in a time crunch situation and makes decisive, critical, superior computeresque moves, based purely on a more precise and finely tuned gut feeling, than his opponent. Especially with too little time to calculate his way through a situation in recent years. In streams where he comments on other GM’s play, he often instantly blurts out statements like “that knight just belongs on e6” or “the position is screaming for bishop f8” and similar, without a second of thought or looking at the AI/engine evaluation.

A similar thing happens, perhaps not at a world class level (but even Magnus makes mistakes, just less often than others), when a person learns to drive: at first, every procedure (shifting gears, 360 degree checks, mirror vigilance, steering) is deliberate and conscious, whereas a practiced driver performs multiple operations fluently/simultaneously without thinking in type 2 manner. We see that in most advanced tasks—like playing chess, doing math, or solving puzzles—humans merge both system 1 and system 2, but they may rely more on one depending on experience and context.

I see current LLMs as occupying a distinct spot: the “intuition” they rely on is not gained through slow, personal experience in one domain, but rather from the vast breadth of their pretraining across nearly every field for which text data or code exists. Again, it’s not just static memory and I do recognize the nuance that they are not just memorizing answers to questions or merely content. AGI advocates scream: “LLMs interpolate, so it's reasoning!” Unfortunately, they don’t often specify what that means. 

What they primarily memorize, if I understand correctly, is functions/programs in a certain way. Programs do generalize to some extent by mathematical definition. When John questions an LLM via his keyboard, he is essentially querying a point in program space, where we can think of the LLM in some domain as a manifold with each point encoding a function/program. Yes, they do interpolate across these manifolds to combine/compose programs, which implies an infinite number of possible programs that John can choose from through his keyboard. That is why they appear to reason richly and organically, unlike earlier years, and can help debug code (as they have been trained against compositions that yield false results) in sophisticated manners with human assistance.

So what we’re doing with LLMs is training them as rich, flexible models to predict the next token. If we had infinite memory capacity, we could simply train them to learn a sort of lookup table. But the reality is more modest: LLMs only have some billions or trillions of parameters. That’s why they screw up basic grade-school math problems not in their training set, or fail to remove a single small element John doesn’t want in the complex image he’s generated with his favorite image generator. An LLM cannot learn a lookup table for every possible input sequence relating to its training data; it is forced to compress. So what these programs “learn” are predictive functions that take the form of vector functions, because the LLM is a curve… and the only thing we can encode with a curve is a set of vector functions. These take elements of the entry sequence as inputs and output elements of what follows. 
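
A toy contrast makes the compression point tangible. The sketch below, with a deliberately silly three-sentence corpus, compares a whole-prefix lookup table (perfect recall, zero generalization) with a compressed bigram predictor (lossy, but it can still guess on inputs it has never seen). Everything here is invented for illustration; real LLMs compress into continuous parameters, not pair counts.

    # Lookup table vs. compressed predictor, on a deliberately tiny corpus.
    from collections import Counter, defaultdict

    corpus = ["the cat sat", "the cat ran", "the dog sat"]

    lookup = {s[:-1]: s[-1] for s in corpus}   # whole prefix -> next character

    bigram = defaultdict(Counter)              # "compressed": pair statistics only
    for s in corpus:
        for a, b in zip(s, s[1:]):
            bigram[a][b] += 1

    def predict(prefix):
        if prefix in lookup:                   # memorized exactly: always right
            return lookup[prefix], "lookup"
        last = prefix[-1]
        if bigram[last]:                       # generalizes, but only a guess
            return bigram[last].most_common(1)[0][0], "bigram"
        return None, "no idea"

    print(predict("the cat sa"))   # ('t', 'lookup')  -- seen during "training"
    print(predict("the fox sa"))   # ('t', 'bigram')  -- never seen, still guesses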

Say John feeds it the works of Oscar Wilde for the first time and the LLM has already “learned” a model of the English language. The text that John has input is slightly different and yet still the English language. And that’s why it’s so good at emulating linguistic styles and elegance or lack thereof: it’s possible to model Oscar Wilde by reusing a lot of the functions learned in modeling English in general. It therefore becomes trivial to model Wilde’s style by simply deriving a style-transfer function that goes from the model of English to the Wilde-style texts. That’s how/why people are amazed at its linguistic dexterity in this sense.

That’s why they appear to have an “intuitive” command of everything—law, biology, programming, philosophy—despite lacking the capacity to handle tasks that need deeper, stepwise reasoning or dynamic re-planning in new contexts. This resonates with your grandfather’s “cattle auction” case: he could handle his specialized domain through near-instant intuition but needed pencil and paper for general arithmetic outside of it, if I understand correctly. LLMs, similarly, may seem “fluent” in many subjects, yet they cannot truly engage in what we would label “system 2 program synthesis”, unless of course the training distribution already covers a close analogue.

In short, the reason LLM outputs feel like “type 1 reasoning on steroids” is that these models have memorized so many examples that their combined intuition extends across nearly all known textual domains. But when a problem truly demands formal reasoning steps absent from their training data, LLMs lack a real “type 2” counterpart—no robust self-critique, no internal program writing, and no persistent memory to refine their logic. We can therefore liken them to formidable intuition machines without the same embedded capacity for system 2, top-down reasoning or architectural self-modification that we see in real human skill acquisition. My big mistake is of course that John would never read Oscar Wilde. 

As a short note to Russell: yes, they appear to be competent at such assistance for similar reasons. I don't see advancements in the coding use case as a function purely of scaling and compute. The more advanced models can assist in such ways because they have longer histories of having suppressed "wrong" function chains/interpolation. Thx for the inspo guys.

Brent Meeker

unread,
Dec 27, 2024, 1:24:42 AM12/27/24
to everyth...@googlegroups.com
On 12/26/2024 8:35 PM, PGC wrote:
> In short, the reason LLM outputs feel like “type 1 reasoning on
> steroids” is that these models have memorized so many examples that
> their combined intuition extends across nearly all known textual
> domains. But when a problem truly demands formal reasoning steps
> absent from their training data, LLMs lack a real “type 2”
> counterpart—no robust self-critique, no internal program writing, and
> no persistent memory to refine their logic. We can therefore liken
> them to formidable intuition machines without the same embedded
> capacity for system 2, top-down reasoning or architectural
> self-modification that we see in real human skill acquisition.

Of course, like humans, an AI based on an LLM could have a formal
reasoning program, like Prolog, and a math program, like Maxima,
appended. But then the trick will be knowing when its type 1 reasoning
isn't accurate enough and switching to type 2, or knowing that type 2
isn't fast enough and switching back to type 1.
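
A minimal sketch of what that dispatch might look like, with the heuristic, the confidence rule, and the numbers all made up for illustration; in a real system the type 2 branch would call out to something like Prolog or Maxima rather than Python's exact fractions:

    # Dispatch between a fast "type 1" guess and a slow exact "type 2" path.
    from fractions import Fraction

    def type1_estimate(a, b):
        # mental-arithmetic style: round the inputs, then divide
        guess = round(a) / round(b)
        roughness = abs(a - round(a)) + abs(b - round(b))
        return guess, roughness

    def type2_exact(a, b):
        # slow but exact symbolic arithmetic (stand-in for a CAS call)
        return Fraction(a).limit_denominator() / Fraction(b).limit_denominator()

    def answer(a, b, tolerance=0.05):
        guess, roughness = type1_estimate(a, b)
        if roughness < tolerance:       # inputs near integers: trust intuition
            return guess, "type 1"
        return type2_exact(a, b), "type 2"   # otherwise escalate to formal reasoning

    print(answer(6.0, 3.0))   # (2.0, 'type 1')
    print(answer(6.1, 2.9))   # (Fraction(61, 29), 'type 2')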

Brent

John Clark

unread,
Dec 27, 2024, 8:15:20 AM12/27/24
to everyth...@googlegroups.com
On Thu, Dec 26, 2024 at 11:02 PM PGC <multipl...@gmail.com> wrote:

I’d note first that your analogy with chess and engine moves ignores an asymmetry: a grandmaster can often sense the “non-human” quality of an engine’s play,


 Superhuman would be a better word for that than nonhuman. 
 

whereas engines are not equipped to detect distinctly human patterns unless explicitly trained for that. That’s why a GM plus an engine can typically spot purely artificial play in a way the engine itself cannot reciprocate.

 
Nope. When AlphaGo beat Lee Sedol, the best human GO player in the world, its winning move was move 37. It was such an unusual move that many GO experts at first thought it was a huge blunder; even the people who wrote AlphaGo were worried, so they checked their readouts, because whenever AlphaGo makes a move it automatically estimates the likelihood that a human would make it; and they found that AlphaGo thought there was only one chance in 10,000 of a human making such a move. Today it is generally agreed among GO experts that move 37 was one of, if not the, most brilliant and creative moves in the entire history of the game. AlphaGo knew it was making what you would call a nonhuman move and what everybody else would call a superhuman move.

Even then, the very subtleties grandmasters notice—intuitive plausibility, certain psychological hallmarks—are not easily reduced to a static dataset of “human moves.”

 
AlphaZero is better at Chess and GO than AlphaGo (and better at any two-player zero-sum game) and it contains NO dataset of human moves, static or otherwise. NOR DOES IT NEED ONE.

Chess self-play is btw trivial to scale because it operates in a closed domain with clear rules.

There are clear rules about what moves in Chess and GO are legal, but there are no clear rules about what moves are good.
 

You can’t replicate that level of synthetic data generation in, say, urban traffic,


The question is moot. Tesla has about 4 million cars on the road and it has been collecting data from them since 2015, so by now it has billions of hours of real traffic data. 

nuclear power plants, surgery, or modern warfare.

Both humans and AIs find nuclear reactor simulators and war games to be very useful, but I grant you that at least right now humans have more experience with surgery than AIs. Today surgery and nursing care are the only areas of medicine in which humans still have an edge over machines. We already know for a fact that O1 Preview is far better than human doctors at diagnosis; I can only imagine how good O3 will be. 

The bait was luring you to clarify your stance that “a problem is solved or it isn’t” while simultaneously implying that AI failure is more tolerable than human failure. That is self-contradictory.

Yes, that certainly would be self-contradictory IF I had said that an AI error is less serious than a human error, BUT I did not. I did not imply it either although you may have inferred that I did when I said that AIs are constantly getting smarter and thus are constantly producing fewer errors, but human beings are not getting smarter.   

   John K Clark    See what's on my new list at  Extropolis  

John Clark

unread,
Dec 27, 2024, 9:04:03 AM12/27/24
to everyth...@googlegroups.com
On Thu, Dec 26, 2024 at 11:35 PM PGC <multipl...@gmail.com> wrote:

they [AIs] screw up basic grade school math problems

Here is an example of a problem that AlphaGeometry got correct on the Mathematical Olympiad about a year ago; it's far easier than the questions that O3 got correct on the Epoch AI FrontierMath test: 

 
"Let ABC be a triangle with AB<AC<BC. Let the incentre and incircle of triangle ABC be I and ω, respectively. Let X be the point on line BC different from C such that the line through X parallel to AC is tangent to ω. Similarly, let Y be the point on line BC different from B such that the line through Y parallel to AB is tangent to ω. Let AI intersect the circumcircle of triangle ABC again at P=A. Let K and L be the midpoints of AC and AB, respectively.
Prove that ∠KIL+∠YPX=180°."

You can find AlphaGeometry's answer at:


John K Clark    See what's on my new list at  Extropolis

Henrik Ohrstrom

unread,
Dec 27, 2024, 10:32:23 AM12/27/24
to everyth...@googlegroups.com

Diagnostics that mostly involve info that has been encoded in a machine-readable way will be available for machine use. However, it will take some time to encode age of clothes, smell, choice of clothes, and the gut feeling that I can't quantify as anything other than "sick person".
Bad data will be a big problem for AI diagnostics for a long time. You can count on one thing, and that is the extremely shoddy data in all historical databases.
It is really bad, so bad that it is a problem for human interpretation of the data even when you have the patient with you to "correct" errors in the medical records.
When those problems have been corrected we are no doubt off to a bright future.
For suitable diagnostics, however, it is already not a question of whether AI is beneficial, but rather of how you train your AI to fit your patient specifics.
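
Even something as simple as plausibility screening of records before training would surface a lot of the shoddy data. A made-up sketch, with field names and ranges invented for illustration:

    # Plausibility screening of historical records before any training run.
    PLAUSIBLE = {                      # invented fields and ranges
        "age_years": (0, 120),
        "heart_rate_bpm": (20, 250),
        "temp_celsius": (30.0, 44.0),
    }

    def audit(record):
        problems = []
        for field, (lo, hi) in PLAUSIBLE.items():
            v = record.get(field)
            if v is None:
                problems.append(f"{field}: missing")
            elif not lo <= v <= hi:
                problems.append(f"{field}: {v} out of range [{lo}, {hi}]")
        return problems

    print(audit({"age_years": 430, "heart_rate_bpm": 72}))
    # ['age_years: 430 out of range [0, 120]', 'temp_celsius: missing']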

/Henrik


0 new messages