
Using neural networks to solve advanced mathematics equations


Peter Luschny

Jan 15, 2020, 5:01:39 AM
"Facebook AI has built the first AI system that can solve advanced mathematics equations using symbolic reasoning."

https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/

j4n bur53

Jan 17, 2020, 7:50:49 AM
What's so "first" about this? Is this a joke?

Peter Luschny wrote:

Dr Huang

Aug 17, 2020, 4:24:16 PM
How can I try it?

mathHand.com

nob...@nowhere.invalid

Aug 19, 2020, 1:48:21 PM

Dr Huang wrote:
>
> How can I try it?
>

Maybe start with the paper and software by Lample and Charton? Note,
however, that they let their neural network loose only on indefinite
integrals and ODEs.

Their 2019 paper:

<https://arxiv.org/abs/1912.01412>

Their github repository:

<https://github.com/facebookresearch/SymbolicMathematics>

I remain sceptical, though.

Martin.

Peter Luschny

Sep 5, 2020, 5:49:12 PM
Symbolic Mathematics Finally Yields to Neural Networks

I quote:

"Lample and Charton’s program could produce precise solutions to complicated integrals and differential equations — including some that stumped popular math software packages with explicit problem-solving rules built in."

"The new program exploits one of the major advantages of neural networks: They develop their own implicit rules. As a result, “there’s no separation between the rules and the exceptions,” said Jay McClelland, a psychologist at Stanford University who uses neural nets to model how people learn math. In practice, this means that the program didn’t stumble over the hardest integrals. In theory, this kind of approach could derive unconventional “rules” that could make headway on problems that are currently unsolvable."

https://www.quantamagazine.org/symbolic-mathematics-finally-yields-to-neural-networks-20200520/

nob...@nowhere.invalid

Nov 22, 2020, 3:26:29 PM

Peter Luschny wrote:
>
> Symbolic Mathematics Finally Yields to Neural Networks
>
> I quote:
>
> "Lample and Charton's program could produce precise solutions to
> complicated integrals and differential equations - including some
> that stumped popular math software packages with explicit
> problem-solving rules built in."
>
> "The new program exploits one of the major advantages of neural
> networks: They develop their own implicit rules. As a result,
> "there's no separation between the rules and the exceptions," said
> Jay McClelland, a psychologist at Stanford University who uses neural
> nets to model how people learn math. In practice, this means that the
> program didn't stumble over the hardest integrals. In theory, this
> kind of approach could derive unconventional "rules" that could make
> headway on problems that are currently unsolvable."
>

<https://www.quantamagazine.org/symbolic-mathematics-finally-yields-to-neural-networks-20200520/>

My sceptical attitude is borne out by results of experiments with the
Lample-Charton code that Qian Yun posted on the <fricas-devel> mailing
list, in a thread started on November 16, 2020 and titled "the 'deep
learning' 'neural network' symbolic integrator":

<https://www.mail-archive.com/fricas...@googlegroups.com/msg13743.html>

Qian's conclusions are (DL = Deep Learning):
>
> 1. It doesn't handle large numbers very well. [...]
>
> 2. DL may give a correct result that contains a strange constant. [...]
>
> 3. DL doesn't understand multiplication very well. [...]
>
> 4. DL doesn't handle long expressions very well. [...]
>
> 5. For the FWD test set with 9986 integrals (which is generated by
> creating a random expression first, then trying to solve it with sympy
> and discarding failures), FriCAS can solve 9980 out of 9986 in 71
> seconds; of the remaining 6 integrals, FriCAS can solve another 2 in
> under 100 seconds. [...] The DL system can solve 95.6%; by comparison,
> FriCAS is over 99.94%.
>
> 6. The DL system is slow. To solve the FWD test set, the DL system
> may use around 100 hours of CPU time.
>
> 7. For the BWD test set (which is generated by creating a random
> expression first, then taking its derivative as the integrand), FriCAS
> can roughly solve 95%, compared with DL's claimed 99.5%. [...]
>
> 8. DL doesn't handle rational function integration very well. It can
> handle '(x+1)^2/((x+1)^6+1)' but not its expanded form. [...] [A quick
> check of this example follows after the list.]
>
> 9. DL doesn't handle algebraic function integration very well. I have
> a list of algebraic functions that FriCAS can solve while other CASs
> can't; DL can't solve them either.
>
> 10. For the harder mixed-case integration, I have a list of
> integrations that FriCAS can't handle; DL can't solve them either.
>
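
As a quick check of item 8: the compact form indeed has an elementary
antiderivative, atan((x+1)^3)/3. A minimal sympy sketch (my own check,
not part of Qian's tests):

    # verify that atan((x+1)^3)/3 is an antiderivative of (x+1)^2/((x+1)^6+1)
    from sympy import symbols, atan, simplify

    x = symbols('x')
    integrand = (x + 1)**2 / ((x + 1)**6 + 1)
    F = atan((x + 1)**3) / 3
    # differentiating the candidate and subtracting the integrand must give 0
    assert simplify(F.diff(x) - integrand) == 0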

Martin.

Nasser M. Abbasi

Dec 1, 2020, 9:39:55 AM
FYI,

Well, deep learning/AI just solved the 50-year-old grand challenge in biology,
the "protein folding problem".

So I am sure one day it will be able to fully solve integration as well?

https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology

The AI system that did this is called AlphaFold.

--Nasser

Richard Fateman

Dec 1, 2020, 2:27:50 PM
Who needs neural networks?
Let's assume that there is a grammar that completely describes all possible
integrands using some standard character set.
(This is quite plausible.)
Now set in motion a program to generate all integrands, in order of
increasing size and alphabetically within each size.
Compute via FriCAS, Rubi, Mathematica, Maple, Maxima ... the indefinite
integral, if possible, and store it in a table: [[integrand_i, result_i], ...]
Exponentially expensive to compute, and store, but who is counting?

We could also do this in addition: generate all possible answers, differentiate
them, and store the results in the same table. The problem here is that
differentiating an expression does not provide a unique simplified form. On the
other hand, what if DL or any other program is asked to integrate "x-x"? Is it
supposed to know that this is the same as integrating "0"? On the plus side, we
have reduced the integration problem to a simplification problem. Namely:

    for p in table_of_integrands do if simplify(p[1] - input) == 0 then return p[2];
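
A minimal sketch of this lookup-by-simplification idea in sympy (toy
table and names are mine, purely illustrative):

    # reduce integration to zero-equivalence: scan a precomputed table
    # of (integrand, antiderivative) pairs for a match with the input
    from sympy import symbols, sin, cos, simplify

    x = symbols('x')
    table = [(2*x, x**2), (1, x)]   # toy stand-in for the giant table

    def integrate_by_lookup(expr):
        for integrand, antiderivative in table:
            if simplify(integrand - expr) == 0:   # the zero-equivalence test
                return antiderivative
        return None

    # 'sin(x)^2 + cos(x)^2' is syntactically unlike '1' but matches via simplification
    print(integrate_by_lookup(sin(x)**2 + cos(x)**2))   # x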

We know that the simplification problem is recursively undecidable, so
there is that problem. Oh, the DL version of integration has the same
flaw, and from the examples posted, where ridiculous constants appear,
it seems that it's truly an in-your-face defect.

Maybe there should be an attempt at the much more fundamental
problem of building a DL that will take any expression and
(a) simplify it
or
(b) just tell you if it is identically zero. [With exponential time and space, a
solution to (b) also provides a simplifier -- if we agree that the simplest
form is the shortest expression, with ties broken alphabetically.]
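
For (b), a standard cheap stand-in is a probabilistic zero test by
evaluation at random sample points; a minimal sketch (false positives
are possible, so this is a heuristic, not a decision procedure):

    # probabilistic zero-equivalence test: evaluate at random points
    import random
    from sympy import symbols, lambdify, sin, cos

    x = symbols('x')

    def probably_zero(expr, trials=20, tol=1e-9):
        f = lambdify(x, expr, 'math')
        for _ in range(trials):
            t = random.uniform(-10.0, 10.0)
            try:
                if abs(f(t)) > tol:
                    return False          # one nonzero sample settles it
            except (ValueError, ZeroDivisionError):
                continue                  # sample fell outside the domain
        return True                       # vanished everywhere we looked

    print(probably_zero(sin(x)**2 + cos(x)**2 - 1))   # True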




It is possibly worth observing that definite integrals with parameters are vastly
more useful (look in reference books) than indefinite integrals, and so this whole
exercise is perhaps not so interesting to applied mathematicians.
Also note that definite integrals (if all extra parameters are set) can
generally be done very nicely by numerical quadrature, and with suitable
tables and plotting, extra "dimensions" for those parameters may also be
computed.

As for comparing systems, I am reminded of a true story from MIT, when
Prof. Hubert Dreyfus, a critic of AI who said that computers could never play
chess because it was too difficult, was beaten by a program, MacHack
( https://en.wikipedia.org/wiki/Richard_Greenblatt_(programmer) ).
Dreyfus said "My brother is a better chess player".
.....
RJF

Richard Fateman

Dec 1, 2020, 2:49:35 PM
On Tuesday, December 1, 2020 at 11:27:50 AM UTC-8, Richard Fateman wrote:
> Who needs neural networks?
One last item: from the evidence posted on fricas-devel, it is apparent that
DL can't do arithmetic. Given n, it appears that it cannot compute n+1 in general.

I don't know if this is susceptible to a proof. I could ask around...

RJF

Richard Fateman

Dec 1, 2020, 3:06:00 PM
See https://arxiv.org/pdf/1904.01557.pdf

The answer seems to be that you should not expect neural networks to do
arithmetic. In this paper there are lots of ambitious tasks, but
1+1+1+1+1+1+1 gets 6 instead of 7.

Apologies for responding to my own post.
> RJF

nob...@nowhere.invalid

Dec 3, 2020, 4:08:12 AM

"Nasser M. Abbasi" schrieb:
> FYI,
>
> Well, deep learning/AI just solved the 50-year-old grand challenge in
> biology, the "protein folding problem".
>
> So I am sure one day, it will be able to fully solve integration as
> well?
>
> https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
>
> The AI system which did this is called alphafold.
>

I looked at the AlphaFold article at <deepmind.com>, but haven't dug
deeper. Apparently, protein-folding theorists are unable to estimate a
folded protein's configuration energy sufficiently quickly; otherwise
simulated annealing (as used to effectively solve the Travelling
Salesman Problem) would allow good folding predictions to be made. But
training a neural network on a library of 1.7*10^5 experimental protein
structures was now found to yield a good folding predictor. Trying to
replicate this feat in one researcher's head would presumably need more
than a lifetime of experience with the library data.

According to Lample and Charton's paper at arXiv, symbolic parameters
were excluded from their FWD and BWD integration test sets - might
there be particular problems with them? (By the way, Maple's algebraic
Risch integrator also appears to reject symbolic parameters.) Still, in
this situation some mean-square numerical deviation of a trial
solution's derivative from the integrand could perhaps be used for a
simulated-annealing approach to symbolic integration. But could
reliable deviation estimates be computed sufficiently quickly?
Something for Nasser to try!
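
For concreteness, a minimal sketch of such a deviation estimate (names
and sampling choices are mine, purely illustrative; not a tested
annealing driver):

    # mean-square deviation of a trial antiderivative's derivative
    # from the integrand, estimated at random sample points
    import random
    from sympy import symbols, lambdify, diff

    x = symbols('x')

    def ms_deviation(trial, integrand, samples=50):
        residual = lambdify(x, diff(trial, x) - integrand, 'math')
        total, used = 0.0, 0
        for _ in range(samples):
            t = random.uniform(0.1, 5.0)   # keep clear of obvious singularities
            try:
                total += residual(t)**2
                used += 1
            except (ValueError, ZeroDivisionError):
                continue
        return total / used if used else float('inf')

    # example: score the (wrong) trial x**2 against integrand 2*x + 1
    print(ms_deviation(x**2, 2*x + 1))   # 1.0, since the residual is constantly -1

An annealing loop would mutate the trial expression and prefer moves
that lower this score.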

Martin.

Richard Fateman

Dec 3, 2020, 2:55:22 PM
You cannot use numerical evaluation to tell if a symbolic indefinite integral is correct, since there are arbitrarily many correct solutions that differ by a constant. Maybe you first differentiate the answer.

It's pretty obvious that omitting symbolic parameters vastly simplifies the problem.
In fact, if you have no symbolic parameters and you are doing DEFINITE integration and
you are allowing numerically "close" answers, then the problem reduces to numerical
quadrature, a well-studied problem with many excellent programs.
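
For instance, with scipy's adaptive quadrature (a routine example, not
tied to the paper):

    # a parameter-free definite integral is one call to numerical quadrature
    from math import exp, sin, pi
    from scipy.integrate import quad

    value, abserr = quad(lambda t: exp(-t) * sin(t), 0.0, pi)
    print(value, abserr)   # ~0.521607, with a tight error estimate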

It seems to me that the computer algebra community has substantially
agreed that this approach is bogus, and the apparent fact that this paper
is still out there, perhaps being cited by the uneducated press, is
unfortunate. Maybe it is a tribute to the lack of visibility of this
community, or to the sad credibility of "AI = machine learning = it is
just a matter of time before it can do everything".

If it requires enumerating all problems of interest and their solutions, it
is just a table lookup. Solving all integration problems that can be expressed
in (say) 90 characters, uh, maybe. But the current system cannot even
integrate x^n where n is an integer, if n is too large. So this DL is
bogus. Is DL on this task inevitably bogus? Is DL unable to compute
n+1 given an integer n? If true, that would disqualify it, I think.

RJF

Nasser M. Abbasi

Feb 24, 2021, 9:05:14 PM
On Wednesday, January 15, 2020 at 4:01:39 AM UTC-6, peter....@gmail.com wrote:
> "Facebook AI has built the first AI system that can solve advanced mathematics equations using symbolic reasoning."
>
> https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/

FYI,

They are now working on using AI to solve PDEs:

https://www.infoq.com/news/2020/12/caltech-ai-pde/

"Caltech Open-Sources AI for Solving Partial Differential Equations"

"The Caltech team's approach is to build a neural network that can learn a solution operator; that is, it learns the mapping between a PDE and its solution."

--Nasser

nob...@nowhere.invalid

Feb 28, 2021, 1:44:48 PM

"Nasser M. Abbasi" schrieb:
I think there is no reason why a neural network should work better here
than for ODE's or plain integrals. And real-world PDE's are likely
involve free parameters too.

Martin.

drhu...@gmail.com

May 17, 2021, 6:26:52 AM
The article said: "training data sets totaling about 200 million (tree-shaped) equations and solutions. [...] but it was slightly less successful at ordinary differential equations."

Can a PC handle 200 million data items, and yet still be less successful at ODEs?
mathHand.com is only about 3 MB in size, yet it can still succeed at ODEs, PDEs, FDEs (fractional differential equations) and more.