On 6/23/20, guyv...@gmail.com <guyv...@gmail.com> wrote:
> No, you didn't answer my question, you just repeated your previous
> statement.
> Why do we need an NN for this, why can't we just compare the results to a
> deeper stockfish evaluation itself instead of to an NN trained to behave
> like a deeper stockfish evaluation?
--oh. Well, the point is that the SF eval is not, in general, capable of
predicting the looked-ahead SF eval; or anyhow, thanks to tuning, it
already does that about as well as it can.
But the NN finds things about the looked-ahead SF eval that it really
*is* capable of predicting algorithmically, fairly effectively.
Some of its predictions may be wrong, but overall the successes
outweigh the failures and pay off. And if an NN can do those things,
maybe a non-NN can also do them (by adding new eval terms).
So the approach I am suggesting sort of automatically finds useful ideas
for SF-eval-designers.
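
To be concrete, here is a minimal sketch of the kind of experiment I
mean (the feature vectors and the looked-ahead eval labels are faked
with random numbers just so the script runs; in reality they would come
from positions labeled by a deep SF search): fit both a linear model,
standing in for what hand-tuned eval terms can express, and a small NN
on the same candidate features, and see whether the NN predicts the
deep eval meaningfully better.

```python
# Sketch only: placeholder data stands in for (features, deep-search eval) pairs.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# --- placeholder data; in reality: features extracted from positions,
# target = looked-ahead (deep-search) SF eval in centipawns ---
n_positions, n_features = 10000, 32
X = rng.normal(size=(n_positions, n_features))         # candidate eval features
static_eval = X @ rng.normal(size=n_features)          # stand-in for current static eval
deep_eval = static_eval + np.tanh(X[:, 0] * X[:, 1])   # stand-in for deep-search score

X_tr, X_te, y_tr, y_te = train_test_split(X, deep_eval, random_state=0)

# Baseline: best linear combination of the existing features
# (roughly what hand-written, tuned eval terms can express).
lin = LinearRegression().fit(X_tr, y_tr)

# The NN: can it predict the looked-ahead eval better than that?
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                  random_state=0).fit(X_tr, y_tr)

print("linear R^2:", lin.score(X_te, y_te))
print("NN     R^2:", nn.score(X_te, y_te))
# If the NN wins by a clear margin, there is structure in the deep eval
# that the current terms miss -- a hint that some new, human-designable
# eval term is waiting to be found.
```

Whatever the NN exploits beyond the linear fit is exactly the sort of
thing one might then try to turn into a new hand-written eval term.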