I know of no reason to think it is impossible to combine methods as you indicate.
There may be other viable approaches as well, and the question is whether it is
worthwhile to pursue this (or any other) approach.
Here's another approach (not original with me). You can write out the proofs of all mathematical
theorems by setting up a huge number of monkeys with keyboards and having them type randomly.
They will, as is well known, eventually produce all of Shakespeare's plays. But all the theorems, too. And their proofs.
Now today, we could simulate the monkeys with GPUs, making this way, way easier.
Now if you are using NNs to choose tactics, I would guess that would be more viable than the
monkeys. But would you (say) invest money in a company pursuing this?
In my experience, humans have puzzled over various tactics in computer algebra system design.
For example: of the 5 or so polynomial greatest common divisor algorithms implemented in
Maxima, which should we use for particular polynomials p, q? Do you think a NN could be
trained for this decision? As I recall, the decision for optimally computing g=GCD(p,q)
is easy if you know all of g,p,q. If you don't have g, there are a few obvious cheap
heuristics that are easy for a human to figure out, but they are not of general use.
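One such cheap heuristic can be sketched in a few lines of Python (the function name and the representation of polynomials as integer coefficient lists, low degree first, are my choices here, not Maxima's): evaluate p and q at a random integer point a and take the integer gcd of the values. If g = GCD(p, q) is nonconstant, g(a) divides both values, so seeing a gcd of 1 is strong evidence that g = 1; with an evaluation point chosen large enough relative to a bound on g's coefficients, it becomes a proof, which is the idea behind the "heuristic GCD" algorithms.

```python
from math import gcd
from random import randrange

def probably_coprime(p, q, trials=4, bound=10**6):
    """Cheap coprimality test: evaluate p and q (coefficient lists, low
    degree first) at random integer points and take the integer gcd of
    the values.  If g = GCD(p, q) is nonconstant, g(a) divides both
    values, so an answer of 1 is strong evidence that g = 1."""
    def ev(coeffs, x):
        # Horner evaluation of the polynomial at x
        acc = 0
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc
    for _ in range(trials):
        a = randrange(3, bound)
        if gcd(ev(p, a), ev(q, a)) == 1:
            return True
    return False  # inconclusive -- fall back to a real GCD algorithm

print(probably_coprime([0, 1], [1, 1]))       # x and x+1: True
print(probably_coprime([-1, 0, 1], [-1, 1]))  # x^2-1 and x-1 share x-1: False
```

The False answer is only "inconclusive": a nontrivial common factor forces the evaluated gcd above 1, but so can an accidental shared integer factor of the values.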
Now I can't prove it is impossible to come up with a NN solution --- in fact, if you have
many monkeys, you can generate all polynomials of length 1 (a, b, c, ..., z, 0, 1, 2, ..., 9),
all polynomials of length 2 (just the 2-digit numbers: 10, 11, ..., 99), all polynomials of length 3
(2*a, a+1, a+2, ..., a+9, a+b, ..., z+9, ...), and so on. You might have clever monkeys that do this in strictly alphabetical
order.
order. You can then compute all the GCD of all pairs of polynomials with fewer than X characters each
using each of the candidate GCD algorithms, and remember which one was fastest. Then when
you need to compute a GCD, you can expect that the NN will tell you which algorithm is fastest.
Of course, the NN might also just tell you the answer GCD.
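Monkey enumeration aside, the timing-and-labeling step is easy to sketch. Here is a minimal Python version; the candidate GCD routines are stand-ins supplied by the caller (integer gcds here, just to have something runnable), not Maxima's actual algorithms.

```python
import itertools
import math
import time

def label_fastest(pairs, candidates):
    """Time every candidate routine on every (p, q) pair and record which
    was fastest -- these labels are what a NN would be trained to predict.
    `candidates` maps a name to a function of two arguments."""
    rows = []
    for p, q in pairs:
        timings = {}
        for name, f in candidates.items():
            t0 = time.perf_counter()
            f(p, q)
            timings[name] = time.perf_counter() - t0
        rows.append((p, q, min(timings, key=timings.get)))
    return rows

def slow_gcd(p, q):
    # deliberately naive stand-in: gcd by repeated subtraction
    while q:
        p, q = min(p, q), max(p, q) - min(p, q)
    return p

# "all inputs up to size X" -- here, all pairs drawn from 1..20
pairs = list(itertools.combinations(range(1, 21), 2))
table = label_fastest(pairs, {"euclid": math.gcd, "subtract": slow_gcd})
print(table[0])
```

The same loop works for polynomial inputs once real candidate routines are plugged in; the hard part, as the text suggests, is choosing a training set that resembles the inputs users actually present.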
This exhaustive search through all pairs of syntactically valid polynomials ... uh, maybe not
the right task??
So how did humans choose the right algorithm? They surmise from various perspectives which
algorithm is likely to be fastest, do some random tests, and maybe allow the user to choose
another algorithm based on special application-specific knowledge. For instance, if you
know in advance that g=GCD(p,q) is very likely to be 1, you use one algorithm. If you
expect that g is of degree almost as large as the degree of p or q, you use another.
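As a toy illustration of that human-designed dispatch (the algorithm names below are hypothetical stand-ins, not Maxima's actual routines):

```python
def choose_gcd_algorithm(p, q, hint=None):
    """Toy dispatcher mirroring the human heuristics described above.
    p and q are coefficient lists; `hint` is optional application-specific
    knowledge supplied by the user."""
    if hint == "likely_coprime":
        return "heuristic_eval_gcd"   # a cheap evaluation settles it fast
    if hint == "high_degree_gcd":
        return "primitive_prs_gcd"    # few division steps when g is large
    if max(len(p), len(q)) <= 4:
        return "euclidean_gcd"        # tiny inputs: overhead dominates
    return "modular_gcd"              # a common general-purpose default

print(choose_gcd_algorithm([1, 1], [2, 1]))  # euclidean_gcd
```

A NN would, in effect, have to learn a better version of this decision table from the inputs alone.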
Now if there are places in a CAS where a NN can make a big difference in
tactical decisions based on the mathematical inputs to an algorithm, I would be
interested in learning of them. The neurosymbolic effort seems aimed at designing
a framework -- a meta-system to support the joining of distinct modes. Eh, no
argument in principle. Is it better to just do AlphaZero, or to have an assistant
that says "don't bother -- we know this branch is infeasible for algorithmic reasons"?
I vote for the one with the assistant.
But none of the target applications looks toward making the
tactical decisions in a computer algebra system, or even identifies where those decisions are.
Typically, if you are really really at a loss as to whether to do Plan A or Plan B
and you know one of them is going to be 100x faster, you do them both,
sharing resources. The first one to finish shuts down the other process.
How bad could this be? Running 2 processes, even interleaved on one CPU,
should take about 2X the time of the faster one.
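The race described above can be sketched in Python using threads and a cooperative stop flag (a real system might instead run separate OS processes and terminate the loser outright); the plan functions here are placeholders:

```python
import queue
import threading
import time

def race(plans, *args):
    """Run all candidate plans concurrently and return the first result.
    Losers are signalled via a shared Event, since Python threads cannot
    be killed; OS processes could simply be terminated instead."""
    results = queue.Queue()
    stop = threading.Event()

    def runner(plan):
        try:
            results.put((plan.__name__, plan(*args, stop)))
        except Exception:
            pass  # a plan that fails simply never wins

    for plan in plans:
        threading.Thread(target=runner, args=(plan,), daemon=True).start()
    name, value = results.get()  # block until the first plan finishes
    stop.set()                   # tell the others not to bother
    return name, value

def plan_a(n, stop):
    time.sleep(0.5)              # pretend this is the much slower method
    return n * n

def plan_b(n, stop):
    return n * n                 # the fast one

print(race([plan_a, plan_b], 7))  # ('plan_b', 49)
```

Long-running plans would be expected to poll `stop` periodically so the loser actually stops consuming resources.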
For the actual neurosymbolic application areas I suspect there may be many more
than 2 plans, or some other important difficulties. And then combining
strategies may be a winner.
But we are talking about computer math here. What decisions do you think
a NN can make, based on an analysis of the input?