
Feb 2, 2022, 4:50:42 PM

Richard Fateman asked this recently.

They are probably closer than you think, if they are not already here.

Kevin Buzzard today said: "You know how these AI people say that they're going to get 'Lean' to solve an IMO problem automatically? And we math people know this will never happen because IMO problems are hard."

According to this new blog post from OpenAI, they solved not one but two:

"Solving (Some) Formal Math Olympiad Problems"

https://openai.com/blog/formal-math/

Feb 2, 2022, 8:01:48 PM

Last I heard, a neural network cannot do exact high-precision arithmetic,

so it can't integrate x^some_explicit_large_integer.

You can prove some of the theorems some of the time.

but...

Did I just prove NN can't integrate even polynomials?
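The point can be made concrete with a minimal Python sketch (this illustrates only the exactness requirement, not anything a NN computes): ∫ x^n dx = x^(n+1)/(n+1), so even the easiest integral needs exact big-integer arithmetic on n + 1, which double precision cannot supply for large n.

```python
from fractions import Fraction

def integrate_monomial(n):
    # ∫ x^n dx = x^(n+1) / (n+1): the answer is exact only if the
    # exponent and the rational coefficient are carried exactly.
    return n + 1, Fraction(1, n + 1)

n = 10**20 + 1                    # stand-in for some explicit large integer
exponent, coeff = integrate_monomial(n)
assert coeff * exponent == 1      # exact rational arithmetic
# Double precision has already lost the low digits of the exponent:
assert int(float(exponent)) != exponent
```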

RJF

Feb 3, 2022, 1:53:44 AM

Of course this is false, AIs can do high-precision arithmetic.

Just like us, NNs can hallucinate, dream, and do math; a classic counterexample is this breakthrough.

https://www.chemistryworld.com/news/no-more-worrying-about-nomenclature-ai-will-tell-you-what-that-chemical-is-called/4014170.article

Try it: https://app.syntelly.com/smiles2iupac

SMILES description of chemicals requires very complex math, and these guys specifically achieved this level.

Also, Google recently managed to emulate emotions in NNs...

Also, I do not think you quite understand the level of modern NNs, for example when they are implemented directly in hardware.

https://datascience.stackexchange.com/a/19039/128423

Feb 3, 2022, 6:13:40 AM

Another viable approach is combining neural (AI) models and symbolic methods, e.g. http://www.neurosymbolic.org/

You don't need the NN to add numbers or integrate polynomials. But a NN can be useful to tell you "OK, try this tactic now, it may be useful here", where the tactic is then executed by some formal method (e.g., Lean, or a Rubi rule).
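That division of labor can be sketched in a few lines of Python (everything here is a toy: the `score` function stands in for a trained model, and integer "goals" stand in for proof states):

```python
def prove(goal, tactics, score, max_steps=50):
    # Greedy neural-guided search: the scorer only *orders* the
    # candidate tactics; the symbolic engine decides whether each one
    # actually applies, so a bad scorer costs time, never correctness.
    state = goal
    for _ in range(max_steps):
        if state == 0:                      # 0 = goal closed
            return True
        for name in sorted(tactics, key=lambda t: -score(state, t)):
            nxt = tactics[name](state)      # engine verifies the step
            if nxt is not None:
                state = nxt
                break
        else:
            return False                    # stuck: no tactic applies
    return state == 0

# Toy "proof states": a nonnegative integer, closed when reduced to 0.
tactics = {
    "halve":     lambda n: n // 2 if n > 0 and n % 2 == 0 else None,
    "decrement": lambda n: n - 1 if n > 0 else None,
}

def score(state, name):
    # Stand-in for the NN: prefer the tactic that reduces the goal more.
    return 1 if name == "halve" else 0
```

A real instantiation would replace the integers with Lean goal states and the scorer with a trained policy, but the safety property is the same: the model proposes, the kernel disposes.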

Moreover, there has also been some progress on the competitive programming side, not only competitive mathematics: https://alphacode.deepmind.com/

Is there any strong argument *against* trying to combine the best of both worlds?

Feb 23, 2022, 11:31:12 PM

On Wednesday, February 2, 2022 at 10:53:44 PM UTC-8, val.za...@gmail.com wrote:

> Of course this is false, AIs can do high precision arithmetics.

You fail to prove your assertion. I don't know what you mean by "AIs" but if they are neural networks, you are simply wrong.

Feb 24, 2022, 12:10:37 AM

I know of no reason that suggests it is impossible to combine methods as you indicate.

There may be other viable approaches, and the question is whether it is worthwhile to pursue

this (or another) approach.

Here's another approach (not original with me). You can write out the proofs of all mathematical

theorems by setting a huge number of monkeys up with keyboards, and having them type randomly.

They will, as is well known, produce all of Shakespeare's plays. But all the theorems, too. And their proofs.

Now today, we could simulate the monkeys with GPUs, making this way way easier.

Now if you are using NNs to choose tactics, I would guess that would be more viable than the

monkeys. But would you (say) invest money in a company pursuing this?

In my experience, humans have puzzled over various tactics in computer algebra system design,

such as ... of the 5 or so polynomial greatest common divisor algorithms implemented in

Maxima, which should we use for particular polynomials p, q? Do you think a NN could be

trained for this decision? As I recall, the decision for optimally computing g=GCD(p,q)

is easy if you know all of g,p,q. If you don't have g, there are a few obvious cheap

heuristics that are easy for a human to figure out, but they are not of general use.

Now I can't prove it is impossible to come up with a NN solution --- in fact if you have

many monkeys, you can generate all polynomials of length 1: a,b,c,d,....z,0,1,2,3,4....9

All polynomials of length 2 [just 2 digit numbers .. 10,11,....,99], All polynomials of length 3

[2*a, a+1, a+2, ....a+9, a+b, ... z+9,...] . You might have clever monkeys that do this in strictly alphabetical

order. You can then compute all the GCD of all pairs of polynomials with fewer than X characters each

using each of the candidate GCD algorithms, and remember which one was fastest. Then when

you need to compute a GCD, you can expect that the NN will tell you which algorithm is fastest.

Of course, the NN might also just tell you the answer GCD.

This exhaustive search through all pairs of syntactically valid polynomials ... uh, maybe not

the right task??

So how did humans choose the right algorithm? They surmise from various perspectives which

algorithm is likely to be fastest, do some random tests, and maybe allow the user to choose

another algorithm based on special application-specific knowledge. For instance, if you

know in advance that g=GCD(p,q) is very likely to be 1, you use one algorithm. If you

expect that g is of degree almost as large as the degree of p or q, you use another.
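One such cheap heuristic is easy to write down (a hypothetical dispatcher, not Maxima's actual logic): evaluate p and q at a random large integer; if the integer gcd of the two values is 1, no nonconstant common factor can exist (any such g would have g(x0) dividing both values), so the cheap "expect g = 1" path is the right bet.

```python
import math
import random

def poly_eval(coeffs, x):
    # Horner evaluation; coeffs[i] is the coefficient of x^i.
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def choose_gcd_algorithm(p, q, trials=3):
    # Heuristic dispatch between GCD algorithms. If gcd(p(x0), q(x0)) == 1
    # at some large x0, then p and q share no nonconstant factor (a
    # nonconstant g with g(x0) = ±1 is possible but rare at a random x0),
    # so pick the fast path tuned for coprime inputs.
    for _ in range(trials):
        x0 = random.randint(10**6, 10**7)
        if math.gcd(poly_eval(p, x0), poly_eval(q, x0)) == 1:
            return "coprime-fast-path"
    return "general-gcd"   # e.g. subresultant PRS or modular GCD
```

The algorithm names are placeholders; the point is only that a one-evaluation probe can route the easy (and very common) g = 1 case away from the heavy machinery.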

Now if there are a set of places in a CAS where a NN can make a big difference in

tactical decisions based on the mathematical inputs to an algorithm, I would be

interested in learning of them. The neurosymbolic effort seems aimed at designing

a framework -- a meta system to support the joining of distinct modes. Eh, no

argument in principle. Is it better to just run AlphaZero, or to have an assistant that

says "don't bother -- we know this branch is infeasible for algorithmic reasons"?

I vote for the one with the assistant.

But of the target applications, none of them look towards making the

tactical decisions in a computer algebra system. Or even where those are.

Typically, if you are really really at a loss as to whether to do Plan A or Plan B

and you know one of them is going to be 100x faster, you do them both,

sharing resources. The first one to finish shuts down the other process.

How bad could this be? Running 2 processes, even interleaved on one CPU...

it should take about 2X the time of the fastest one.
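That race is a few lines with an ordinary thread pool; a minimal sketch, with sleeps standing in for the two algorithms:

```python
import concurrent.futures
import time

def race(plans):
    # Run all candidate plans concurrently and keep the first result.
    # Losers are cancelled best-effort: an already-running thread just
    # finishes and is ignored.
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(plans)) as ex:
        futures = {ex.submit(plan): name for name, plan in plans.items()}
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()
        winner = next(iter(done))
        return futures[winner], winner.result()

def plan_a():
    time.sleep(0.01)   # the fast algorithm
    return "A"

def plan_b():
    time.sleep(0.4)    # the much slower alternative
    return "B"
```

With explicit processes (multiprocessing.Process.terminate) the losing plan can actually be killed, which is what you want when one branch might run 100x longer.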

For the actual neurosymbolic application areas I suspect there may be many more

than 2 plans, or some other important difficulties. And then combining

strategies may be a winner.

But we are talking about computer math here, and what decisions do you think

a NN can make, based on analysis of input?

Oct 5, 2022, 7:35:00 PM

"I don't know what you mean by "AIs" but if they are neural networks, you are simply wrong."

You were saying? This neural network found an algorithm to multiply two 4x4 matrices with just 47 multiplications (in modulo-2 arithmetic).
Previously 49 (two-level Strassen) was the best known, with the naive algorithm taking n^3 = 4^3 = 64. https://www.nature.com/articles/s41586-022-05172-4/figures/6

https://math.stackexchange.com/questions/578342/number-of-elementary-multiplications-for-multiplying-4x4-matrices/4546116#4546116
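For scale, the 2x2 base case of this game is Strassen's classic 1969 scheme: 7 multiplications instead of the naive 8, and applying it recursively at two levels gives the 49-multiplication 4x4 algorithm that was the previous record. A minimal sketch:

```python
def strassen_2x2(A, B):
    # Multiply two 2x2 matrices with 7 scalar multiplications (Strassen)
    # instead of the naive 8. The entries only need +, -, *, so they may
    # themselves be matrices, which is what makes the scheme recursive.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p1 + p5 - p3 - p7))
```

AlphaTensor's contribution was to find a 47-multiplication analogue at the 4x4 level (over mod-2 arithmetic) by treating the search for such identities as a game.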
