stockfishNNUE vs others (TCEC 18 bonus)


Warren D Smith

unread,
Jul 14, 2020, 9:25:12 AM7/14/20
to FishCooking
StockfishNNUE is currently playing a special exhibition match vs the top 4 TCEC opponents
AllieStein, Stockfish, LcZero, and StoofvleesII, with 56 games planned.
SFNNUE (approximate description) uses a small neural net for eval, much smaller and faster than
the nets of LcZero, Stoofvlees, and AllieStein, combined with SF's search.
So far, after 38 games, SFNNUE is holding its own with 20 points, which is not quite the top result
in the present standings (that belongs to AllieStein), but is presently ahead of ordinary Stockfish.

Typical node rates:
SF: 125 to 170 Mnps.    Fastest in endgames.
SFNNUE: 38 to 134 Mnps.  Fastest in endgames.
AllieStein: 200 to 760 knps.
LcZero: 34 to 85 knps.
StoofvleesII: 14 to 73 knps.

Based on this, it seems to me quite plausible that SFNNUE will become stronger than SF and ought to replace it,
or that some hybrid of SFNNUE with ordinary Stockfish will become the strongest.


garrykli...@gmail.com

unread,
Jul 14, 2020, 11:14:49 AM7/14/20
to FishCooking
Darnit! Just woke up and my mind read it as 'stockfishNN vs others (TCEC 18 SCAM bonus)'.

Thought you'd made an amazing inverted version of Annabelle Reese Bailey (arb!)

vizve...@gmail.com

unread,
Jul 14, 2020, 11:59:23 AM7/14/20
to FishCooking
What would a hybrid of "SF and SF NNUE" even be, if SF NNUE is just SF with a small NN as eval and everything else is roughly equal?
It could probably just become the usual SF if it gets significantly stronger.

Tuesday, July 14, 2020 at 16:25:12 UTC+3, warre...@gmail.com:

Warren D Smith

unread,
Jul 14, 2020, 3:09:50 PM7/14/20
to FishCooking
On 7/14/20, vizve...@gmail.com <vizve...@gmail.com> wrote:
> What would a hybrid of "SF and SF NNUE" even be, if SF NNUE is just SF
> with a small NN as eval and everything else is roughly equal?
> It could probably just become the usual SF if it gets significantly
> stronger.

--1. Pure SF can try to learn from SF-NNUE by taking a lot of chess positions,
finding the ones with large |NNUE - SFeval|, staring at them, and finding
patterns and ideas; then using those ideas to improve SFeval so this difference
decreases and SF hopefully plays smarter with very little change in speed.
A sketch of the mining step is below.
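A minimal sketch of that mining step, assuming the caller supplies wrappers
around the two evals (none of these names are real Stockfish API):

    #include <algorithm>
    #include <cstddef>
    #include <cstdlib>
    #include <functional>
    #include <string>
    #include <vector>

    // A position (FEN) and the absolute eval disagreement in centipawns.
    struct Disagreement { std::string fen; int diff_cp; };

    // Given two eval callbacks (e.g. SF's classical eval and the NNUE eval),
    // return the `keep` positions where they disagree the most.
    std::vector<Disagreement> top_disagreements(
        const std::vector<std::string>& fens,
        const std::function<int(const std::string&)>& eval1,
        const std::function<int(const std::string&)>& eval2,
        std::size_t keep)
    {
        std::vector<Disagreement> out;
        out.reserve(fens.size());
        for (const auto& fen : fens)
            out.push_back({fen, std::abs(eval1(fen) - eval2(fen))});

        keep = std::min(keep, out.size());
        std::partial_sort(out.begin(), out.begin() + keep, out.end(),
                          [](const Disagreement& a, const Disagreement& b) {
                              return a.diff_cp > b.diff_cp;
                          });
        out.resize(keep);
        return out;  // hand these to a human to stare at for patterns
    }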

2. Add the NNUE eval to the SF code as Eval2, in addition to SF's original Eval1.
Eval2 will be slower than Eval1, apparently by a factor of 2, but hopefully
smarter, at least in some situations.

The question then is: how should SF exploit the availability of both
Eval1 & Eval2?
There are numerous possible ways to try.

One idea: if (say) NNUE is judged better than SF in "openings" but SF is
better in "endings" (I have no idea whether that is true, or what flavor of
it is true, but probably something like it is), just switch from Eval2 to
Eval1 when the game stage reaches some threshold, as in the sketch below.
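A minimal sketch of that switch, assuming the caller supplies some game-stage
measure such as remaining non-pawn material (names and threshold are
illustrative, not real SF code):

    // Pick which eval to call based on game stage.
    enum class EvalKind { Classical, NNUE };

    EvalKind pick_eval(int non_pawn_material_cp, int endgame_threshold_cp = 5000)
    {
        // Use the (hopefully smarter) Eval2/NNUE while lots of material is
        // on the board, then fall back to the faster Eval1 in endings.
        return non_pawn_material_cp > endgame_threshold_cp ? EvalKind::NNUE
                                                           : EvalKind::Classical;
    }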

Another well-known idea is the "lazy" approach. Regard Eval2 as the "true" eval
and Eval1 as a fast approximation to it. Call Eval1, and if the resulting
value x is inside (alpha-epsilon, beta+epsilon), then you need more accuracy,
so call Eval2 to compute the better x and use it. Otherwise Eval1 was good enough.
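In code, the dispatch might look like this minimal sketch (the Value type,
the eval callbacks, and epsilon are all stand-ins that would need tuning):

    #include <functional>

    using Value = int;  // centipawns

    Value lazy_eval(Value alpha, Value beta,
                    const std::function<Value()>& eval1,
                    const std::function<Value()>& eval2,
                    Value epsilon = 150)
    {
        Value x = eval1();  // fast approximation first
        // Only when the cheap value lands near the (alpha, beta) window do
        // we need the accuracy of the slow eval; otherwise Eval1 sufficed.
        if (x > alpha - epsilon && x < beta + epsilon)
            x = eval2();
        return x;
    }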

Both of those approaches could be used at the same time; then you'd be doing
the lazy trick in openings and ordinary SF (Eval1 only) in endings.

--
Warren D. Smith
http://RangeVoting.org <-- add your endorsement (by clicking
"endorse" as 1st step)

Warren D Smith

unread,
Jul 14, 2020, 3:23:34 PM7/14/20
to FishCooking
At present AllieStein is leading and SFNNUE is in 2nd place in the TCEC bonus
tourney; ordinary SF is tied with LcZero for 3rd and 4th places.
The differences between these four are not actually statistically significant,
because the tourney has too few games, but StoofvleesII looks genuinely weaker
than the other four.

SFNNUE probably will keep getting better the more its neural net is trained,
so it might already be better than SF. Or soon will be. Or not. Anyhow, it
plainly is in the same ballpark, and it seems to me NNUE is a gold mine for SF
to try to exploit. The other way to look at it is that SF is a gold mine for
SFNNUE to exploit; in fact, SFNNUE already is exploiting SF by using it
to provide training data. In that view, the best hybrid of SFNNUE and
SF might just be "SFNNUE", and plain SF then would be relegated to a
"teacher/coach" role behind the scenes.

Nickolas

unread,
Jul 15, 2020, 2:16:19 AM7/15/20
to FishCooking
People have been pronouncing Stockfish dead for years, and yet ...

Realistically, SFNNUE probably isn't yet quite the right architecture to dethrone Stockfish on actual CPUs or at CPU TDPs. Stockfish's evaluation function is tuned to its search and its search is tuned to its evaluation function. Even considering the limitations of human ingenuity in terms of crafting ever more sophisticated evaluation terms and search heuristics, that factor alone probably means it will be hard to achieve an overall improvement simply by dropping in a replacement for one of Stockfish's components.

Now if you can train NNUEs (or something like them) in tandem to do it all -- eval, search, quiescence, etc. -- that's probably capable of mostly eliminating the need for humans, but it seems like no one's put all the pieces together yet. (Maybe we haven't yet figured out good training signals for NN-guided AB-style search? I don't know, I haven't really looked into it.)

Additionally, NNUE evaluation functions still suffer from the same sort of bloat that traditional NN evaluation functions do, in that you have to consider the output of the whole network to get a sensible result. Now, NNUEs certainly don't need to recalculate the entire network every time changes are made to a position -- hence them being "efficiently updating" -- but the whole network is still ultimately used in the computation at each node. There are no doubt significant gains to be made by transitioning to a more hierarchical architecture that calculates a little bit, and uses that initial result to determine whether to spend any more cycles on further calculation, then calculates a little bit more, etc. (Not at all dissimilar to the various thresholds Stockfish currently uses to stop evaluation of a node early.)
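A toy sketch of that hierarchical idea, where a cheap sub-network's output
decides whether the full network is worth running (all names are hypothetical;
as noted above, current NNUE nets are not structured this way):

    #include <cstdlib>
    #include <functional>

    using Value = int;  // centipawns

    Value cascaded_eval(const std::function<Value()>& small_net,
                        const std::function<Value()>& full_net,
                        Value decisive_margin = 400)
    {
        Value rough = small_net();               // cheap first stage
        if (std::abs(rough) >= decisive_margin)  // clearly decided: stop early
            return rough;
        return full_net();                       // otherwise pay for the full net
    }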

F P

unread,
Jul 15, 2020, 6:06:59 AM7/15/20
to FishCooking
https://workupload.com/file/ggEUrvNVgmH

Use these binaries + net and say that again.

>Realistically, SFNNUE probably isn't yet quite the right architecture to dethrone Stockfish on actual CPUs or at CPU TDPs.

Warren D Smith

unread,
Jul 15, 2020, 8:13:14 AM7/15/20
to Nickolas, FishCooking
On 7/15/20, Nickolas <nic...@gmail.com> wrote:
> People have been pronouncing Stockfish dead for years, and yet ...
>
> Realistically, SFNNUE probably isn't yet quite the right architecture to
> dethrone Stockfish on actual CPUs or at CPU TDPs. Stockfish's evaluation
> function is tuned to its search and its search is tuned to its evaluation
> function. Even considering the limitations of human ingenuity in terms of
> crafting ever more sophisticated evaluation terms and search heuristics,
> that factor alone probably means it will be hard to achieve an overall
> improvement simply by dropping in a replacement for one of Stockfish's
> components.

--Despite its pathetic search untuned to eval,
SF-NNUE currently in clear FIRST PLACE with 52% score, all others
(SF, Lc0, Allie, Stoof) with <=50%.

So golly gee, just imagine if you actually did tune the search.
Thank you ever so much for making my argument for me.

> Additionally, NNUE evaluation functions still suffer from the same sort of
> bloat that traditional NN evaluation functions do, in that you have to
> consider the output of the whole network to get a sensible result. Now,
> NNUEs certainly don't need to *recalculate *the entire network every time
> changes are made to a position -- hence them being "efficiently updating"
> -- but the whole network is still ultimately used in the computation at
> each node. There are no doubt significant gains to be made by transitioning
> to a more hierarchical architecture that calculates a little bit, and uses
> that initial result to determine whether to spend any more cycles on
> further calculation, then calculates a little bit more, etc. (Not at all
> dissimilar to the various thresholds Stockfish currently uses to stop
> evaluation of a node early.)

--NNUE eval is slower than SF's by about a factor of 2.
It is quite conceivable that some of the improvements NNUE learned can be
rewritten in a non-NN style, then added to SF's eval to gain the smarts while
paying almost no price in speed. That is one idea I would recommend trying.

Also, it is possible that NNUE has learned some stuff that cannot be
thus understood and translated, and hence NNUE simply will be better than SF.
But if so, hybridization ideas such as "lazy eval" may be able to improve
things further by tapping into plain SF's extra speed whenever the value is
far enough outside the alpha-beta window.

Warren D Smith

unread,
Jul 15, 2020, 8:43:10 AM7/15/20
to Nickolas, FishCooking
> Also, it is possible that NNUE has learned some stuff that cannot be
> thus understood and translated, and hence NNUE simply will be better than
> SF.
> But if so, hybridization ideas such as "lazy eval" may be able to
> improve further
> by tapping into plain-SF's extra speed whenever NNUE is enough
> outside the alpha beta window.

--my "lazy eval" idea may not work for the following reason. SF-NNUE
is "incremental"
that is, the first layer of the neural net is updated after every move
or unmove.
The point is, you cannot turn that off via "lazy"ness. So you are going to pay
a speed price, the move-making function is going to be slower.
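For concreteness, here is a toy sketch of what that per-move incremental
update looks like (illustrative types and names, not real NNUE code); this
cost is paid on every make/unmake whether or not eval() is ever called at
the node:

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr int kHidden = 256;
    using Accumulator = std::array<int16_t, kHidden>;

    // Making a move = subtract the first-layer weight columns of features
    // that disappeared and add the columns of features that appeared.
    void update_accumulator(Accumulator& acc,
                            const std::vector<const int16_t*>& removed,
                            const std::vector<const int16_t*>& added)
    {
        for (const int16_t* w : removed)
            for (int i = 0; i < kHidden; ++i) acc[i] -= w[i];
        for (const int16_t* w : added)
            for (int i = 0; i < kHidden; ++i) acc[i] += w[i];
    }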

So it might be that SF-NNUE is not improvable via laziness plus the
plain-SF eval.

Nickolas

unread,
Jul 15, 2020, 4:36:18 PM7/15/20
to FishCooking
On Wednesday, July 15, 2020 at 5:06:59 AM UTC-5, F P wrote:
https://workupload.com/file/ggEUrvNVgmH

Use these binaries + net and say that again.


I don't have the resources to run statistically significant comparisons. I'm basing my opinions on results seen so far at TCEC and CCCC, which have both played a fair number of games with SFNNUE in various settings. So far, I've seen no results to suggest that SFNNUE is on par with Stockfish. (It's lost every head-to-head match, performance against the same opponents is inferior, etc.)

For example, Stockfish (with contempt 50) recently had a performance around 150 Elo higher than Komodo in a TCEC bonus, while currently at CCCC, SFNNUE only has a performance around 50 Elo higher than Komodo. The results aren't directly comparable, as there are different time controls, different hardware, slightly different versions, etc., but as I said, so far I've seen no evidence that SFNNUE is on par with Stockfish.

SFNNUE's performance in the current TCEC gauntlet is impressive, but also a very small sample size.

Nickolas

unread,
Jul 15, 2020, 4:51:38 PM7/15/20
to FishCooking
On Wednesday, July 15, 2020 at 7:13:14 AM UTC-5, Warren D Smith wrote:
--Despite its pathetic search untuned to eval,
SF-NNUE currently in clear FIRST PLACE with 52% score, all others
(SF, Lc0, Allie, Stoof) with <=50%.

So golly gee, just imagine if you actually did tune the search.
Thank you ever so much for making my argument for me.

Even when you wrote this, it wasn't true. First, TCEC is currently running a gauntlet, not a tournament. That's why SFNNUE has played four times as many games as the other engines, and none of the other engines have games against each other. There are no "places" in gauntlets, and thus SFNNUE can't be in first place.

Second, SFNNUE doesn't have a winning score against all of its gauntlet opponents. SFNNUE has a winning score against Stoofvlees, an even score against Stockfish and Leela, and a losing score against AllieStein. Again, these were the objective facts when you wrote your claim, which is simply and obviously false. All of the decisive games in the gauntlet -- so far -- occurred very early on. The last decisive game in the current gauntlet started at 16:41:56 on July 12th, and ended approximately two hours later. (It's also quite silly to place so much stock in the results from a very small sample of games.)

I must admit some confusion as to why you would just post an easily verified and objective falsehood, but I suppose these are the times we're living in.

Warren D Smith

unread,
Jul 15, 2020, 5:03:38 PM7/15/20
to Nickolas, FishCooking
On 7/15/20, Nickolas <nic...@gmail.com> wrote:
> On Wednesday, July 15, 2020 at 5:06:59 AM UTC-5, F P wrote:
>>
>> https://workupload.com/file/ggEUrvNVgmH
>>
>> Use these binaries + net and say that again.
>>
>
>
> I don't have the resources to run statistically significant comparisons.
> I'm basing my opinions on results seen so far at TCEC and CCCC, which have
> both played a fair number of games with SFNNUE in various settings.


> SFNNUE's performance in the current TCEC gauntlet is impressive, but also a
> very small sample size.

--Playing the final TCEC game now; if it is a draw (probably), then
SF-NNUE will finish at 29/56 with the 2nd-highest percentage score,
behind only AllieStein at 7.5/14.
I agree it's a small sample, means little, etc. In fact every game was a
draw, except that SFNNUE had 3 wins over Stoofvlees and 1 loss to AllieStein.

However, NNUE improves with more training, hence might surpass plain SF.
And even if it does not, it might surpass it in some circumstances or some
types of positions; in fact, it probably already has. If so, then SF probably
could be improved, and it might be pretty straightforward to figure out
how to improve it.

Nickolas

unread,
Jul 15, 2020, 5:06:44 PM7/15/20
to FishCooking
On Wednesday, July 15, 2020 at 7:43:10 AM UTC-5, Warren D Smith wrote:
--my "lazy eval" idea may not work for the following reason.  SF-NNUE
is "incremental"
that is, the first layer of the neural net is updated after every move
or unmove.

Lazy eval isn't your idea; it's existed in chess engine programming for decades. You also clearly don't understand how it works, how SFNNUE works, or how the two ideas would work in tandem.

The way you would apply lazy evaluation to SFNNUE would be to not update the entire first layer of the network with each move. Whatever computation could be done, simply don't do it at all until you "know" that you need to. That's what lazy evaluation is: allow the evaluation to be imprecise, and only refine it as necessary based on some conditions.

SFNNUE's network isn't currently structured in a way that would allow for this -- that is, such that you will get reasonable/meaningful results out of subnetworks -- but there's no reason it couldn't be.
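As a minimal sketch of that deferral (hypothetical structure, not current
SFNNUE code): the accumulator is merely marked dirty on make/unmake, and the
first-layer work is only done when an eval is actually demanded:

    #include <array>
    #include <cstdint>
    #include <functional>

    using Accumulator = std::array<int16_t, 256>;

    struct LazyAccumulator {
        Accumulator acc{};
        bool dirty = true;  // set by make/unmake moves, cleared on refresh
    };

    int lazy_nnue_eval(LazyAccumulator& a,
                       const std::function<void(Accumulator&)>& refresh,
                       const std::function<int(const Accumulator&)>& forward)
    {
        if (a.dirty) {       // pay the first-layer cost only on demand
            refresh(a.acc);
            a.dirty = false;
        }
        return forward(a.acc);  // run the rest of the network
    }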
 
The point is, you cannot turn that off via laziness. So you are going to pay
a speed price: the move-making function is going to be slower.

You are wrong.
 
So it might be that SF-NNUE is not improvable via laziness plus the
plain-SF eval.

It's hard to understand what you even think you mean here, but SFNNUE doesn't use Stockfish's evaluation (outside of training). That's the point: it's a drop-in replacement for the evaluation function.

Warren D Smith

unread,
Jul 15, 2020, 5:14:37 PM7/15/20
to Nickolas, FishCooking
On 7/15/20, Nickolas <nic...@gmail.com> wrote:
> On Wednesday, July 15, 2020 at 7:13:14 AM UTC-5, Warren D Smith wrote:
>>
>> --Despite its pathetic search untuned to eval,
>> SF-NNUE currently in clear FIRST PLACE with 52% score, all others
>> (SF, Lc0, Allie, Stoof) with <=50%.
>>
>> So golly gee, just imagine if you actually did tune the search.
>> Thank you ever so much for making my argument for me.
>>
>
> Even when you wrote this, it wasn't true. First, TCEC is currently running
> a gauntlet, not a tournament. That's why SFNNUE has played four times as
> many games as the other engines, and none of the other engines have games
> against each other. There are no "places" in gauntlets, and thus SFNNUE
> can't be in first place.

> I must admit some confusion as to why you would just post an easily
> verified and objective falsehood, but I suppose these are the times we're
> living in.

--I reckoned "place" based on percentage score.
Highest percentage = first place = at that time held by SF-NNUE.
Nothing false there.

So for example, Kasparov played a clocked simul versus a 4-GM
German team, and got a net plus score. He was therefore claimed by
the organizers to have "won" and got a car as a prize. If Kasparov had gotten
a minus score, then the German GMs would have each won (cheaper) cars.
Nickolas evidently was not there to tell them they were fools, that Kasparov
had "not really won", that there was "no such thing" as a victory, and that
this was not even a tournament at all.

But anyhow, I agree with you (see my previous email) that this whole
tourney, or gauntlet, or whatever you want to call it, was not statistically
significant proof of anybody's superiority. I still claim SF would be wise
to, e.g., try to find out in what ways NNUE is superior to its own eval.

Kelly Kinyama

unread,
Jul 15, 2020, 8:43:50 PM7/15/20
to FishCooking
Where can I get the source, so that I can double the NNUE Elo?

Kelly Kinyama

unread,
Jul 15, 2020, 8:51:24 PM7/15/20
to FishCooking
How can I train SF NNUE?


On Wednesday, 15 July 2020 13:06:59 UTC+3, F P wrote:

Warren D Smith

unread,
Jul 15, 2020, 9:02:18 PM7/15/20
to Kelly Kinyama, FishCooking
This is the Stockfish-NNUE source code, although it might not
be the latest "official" source code (whatever that means):
https://github.com/joergoster/Stockfish-NNUE

Nickolas

unread,
Jul 16, 2020, 12:18:11 AM7/16/20
to FishCooking
On Wednesday, July 15, 2020 at 4:14:37 PM UTC-5, Warren D Smith wrote:

--I reckoned "place" based on percentage score.
Highest percentage = first place = at that time held by SF-NNUE.
Nothing false there.


Even at the time you made your original claim, what you wrote was simply and obviously false. You wrote "SF-NNUE currently in clear FIRST PLACE with 52% score, all others (SF, Lc0, Allie, Stoof) with <=50%."

At the time you made that claim, AllieStein had a winning score against SFNNUE, and a better winning percentage overall. After AllieStein beat SFNNUE in their third game, at no point during the entire rest of the tournament did SFNNUE's winning percentage surpass AllieStein's. AllieStein ended the tournament with a higher winning percentage than SFNNUE.

So even by your desperate, twisted, post hoc rationalizations, what you said was simply, entirely, and unequivocally false. This is a pattern with you. You often spout sensationalist bullshit contrary to objective facts, and then attempt to rationalize away your bullshit when called out on it.


So for example, Kasparov played a clocked simul versus a 4-GM
German team, and got a net plus score. He was therefore claimed by
the organizers to have "won" and got a car as a prize. If Kasparov had gotten
a minus score, then the German GMs would have each won (cheaper) cars.
Nickolas evidently was not there to tell them they were fools, that Kasparov
had "not really won", that there was "no such thing" as a victory, and that
this was not even a tournament at all.

 
Obviously and entirely irrelevant. If the event you describe actually happened (in your case, you never know, there are decent odds you're just making this up, too), then an event was organized, conditions for winning and losing were laid out and agreed upon beforehand, and the event was run and prizes were awarded as agreed. It has literally nothing to do with a random bonus gauntlet on TCEC.

But again, even according to your desperate, twisted, post hoc rationalizations, SFNNUE didn't "win" the TCEC bonus gauntlet. The undefeated AllieStein ended the gauntlet with the highest winning percentage, and indeed gained and continuously held the highest winning percentage in the gauntlet from the 9th game through the 56th and final game.

Kelly Kinyama

unread,
Jul 16, 2020, 7:28:52 AM7/16/20
to FishCooking

Kelly Kinyama

unread,
Jul 16, 2020, 7:31:21 AM7/16/20
to FishCooking
Where is the source for these?


On Wednesday, 15 July 2020 13:06:59 UTC+3, F P wrote:

X

unread,
Jul 16, 2020, 8:27:41 AM7/16/20
to FishCooking
Where can I get the source, so that I can double the NNUE Elo?

din12...@gmail.com

unread,
Jul 16, 2020, 8:30:28 AM7/16/20
to FishCooking

Kelly Kinyama

unread,
Jul 16, 2020, 11:59:59 AM7/16/20
to FishCooking
Thank you. How do I train SF-NNUE?

On Thursday, 16 July 2020 15:30:28 UTC+3, din12...@gmail.com wrote:
https://github.com/nodchip/Stockfish/tree/e29499ee4b99174570fc49ac918f1dbd5bc22660

Warren D Smith

unread,
Jul 16, 2020, 11:32:05 PM7/16/20
to Kelly Kinyama, FishCooking
glbch...@gmail.com is now claiming on the LcZero forum
that it is obvious to him (or her?) that SF-NNUE already outplays
ordinary SF in "positional" chess, due to NNUE's obviously-greater
"positional understanding."

Personally, that is not obvious to me, but it seems at least possible...
I'm just pointing out that he claimed it was obvious to him. He had a
couple of games he thought backed him up in that assertion.

So anyhow, my weaker claim is merely that it is worth trying to understand
when and in what ways SF-NNUE is a superior player to SF; the initial attempt
to understand that can be done via automation, and it could yield some
low-hanging fruit, i.e. cheap and effective ways to improve SF.

Kelly Kinyama

unread,
Jul 17, 2020, 4:55:23 AM7/17/20
to FishCooking
We have started working on SFNNUE, to improve its Elo via Lc0- and AlphaZero-style self-play. That is why I need to know how it is trained.

Jörg Oster

unread,
Jul 17, 2020, 6:29:24 AM7/17/20
to FishCooking

Kelly Kinyama

unread,
Jul 17, 2020, 1:39:03 PM7/17/20
to FishCooking
Thank you, Mr. Oster. But some of us are too dull if there is no video tutorial.

Andrea Manzo

unread,
Jul 17, 2020, 6:19:59 PM7/17/20
to FishCooking
I created a fork of Joergoster's repository and updated it to the latest Stockfish patch:

https://github.com/amchess/Stockfish-NNUE/

Let me know if it's all good.
Andrea

Kelly Kinyama

unread,
Jul 18, 2020, 10:44:58 AM7/18/20
to FishCooking
I tried the nets in Monte Carlo tree search. They are better than Stockfish's evaluation function because they are able to solve some tactical positions. But they are still far from Leela's nets.

X

unread,
Jul 18, 2020, 2:04:49 PM7/18/20
to FishCooking
What part of "'Neural network' is an empty buzzword" don't you understand?

kellyki...@gmail.com

unread,
Jul 18, 2020, 10:20:04 PM7/18/20
to FishCooking
I am a computer chess programmer. I test everything. What works, I keep; what fails, I discard. Unlike you, I am not a fan of any engine. I am a fan of ideas.

garrykli...@gmail.com

unread,
Jul 19, 2020, 10:28:14 AM7/19/20
to FishCooking
Exactly. Who cares what person/program wins, as long as it's impressive!

Warren D Smith

unread,
Jul 19, 2020, 1:51:32 PM7/19/20
to garrykli...@gmail.com, FishCooking
--SF-NNUE's neural net is a very different design from LcZero's neural net.
And plausibly, for the purpose of making it fast on a CPU, NNUE's design
is far superior. If you want to exploit a GPU and are willing to accept slow
node rates, then likely not.

kellyki...@gmail.com

unread,
Jul 20, 2020, 3:17:59 AM7/20/20
to FishCooking
The problem, IMHO, is that alpha-beta is not a good algorithm for training neural nets, because training a NN requires a trial-and-error search algorithm like Monte Carlo tree search.
Alpha-beta makes good moves, thereby making learning slow. And SF never makes tactical errors. (That's my humble opinion.)

Message has been deleted

mr.mi...@gmail.com

unread,
Jul 20, 2020, 7:44:03 AM7/20/20
to FishCooking
Training is one thing, playing is another. There are several solutions already used in NNUE training: X random moves between plies a and b, or random MultiPV.

F P

unread,
Jul 20, 2020, 8:03:33 AM7/20/20
to FishCooking
You can let SF do a MultiPV search and pick a (semi-)random move.
Introducing blunders is easy; see the sketch below.
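A minimal sketch of that pick, assuming the MultiPV lines have already been
collected and sorted by score (PvLine and the exploration knob are
illustrative, not real SF code):

    #include <cstddef>
    #include <random>
    #include <string>
    #include <vector>

    struct PvLine { std::string move; int score_cp; };

    // Usually play the best line; occasionally pick a random other PV line
    // to diversify the training data with mild "blunders".
    std::string pick_training_move(const std::vector<PvLine>& multipv,
                                   std::mt19937& rng,
                                   double explore_prob = 0.1)
    {
        std::bernoulli_distribution explore(explore_prob);
        if (multipv.size() <= 1 || !explore(rng))
            return multipv.front().move;  // best move
        std::uniform_int_distribution<std::size_t> idx(1, multipv.size() - 1);
        return multipv[idx(rng)].move;
    }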

Alternatively we could create a simple MCTS engine and use that to provide teacher data.

Training data generation can be done in thousands of ways.

tun nay

unread,
Jul 20, 2020, 9:16:04 AM7/20/20
to FishCooking
What about NNUE training based on Stockfish MCTS? Or Komodo MCTS?

thearbch...@yahoo.com

unread,
Jul 20, 2020, 11:58:45 AM7/20/20
to FishCooking
On Tuesday, 14 July 2020 17:14:49 UTC+2, garrykli...@gmail.com wrote:
Darnit! Just woke up and my mind read it as 'stockfishNN vs others (TCEC 18 SCAM bonus)'.

Thought you'd made an amazing inverted version of Annabelle Reese Bailey (arb!)


Sometimes this garrykilljoy@ is actually funny! :)

A.R.B :)
Message has been deleted

kellyki...@gmail.com

unread,
Jul 21, 2020, 1:50:47 AM7/21/20
to FishCooking
If you limit Stockfish NNUE to depth 3 or less, it produces bad moves of about 1400 Elo; if you limit Leela to the same depth, it plays like a FIDE master. That's my contention with the Stockfish NNUE nets.

mr.mi...@gmail.com

unread,
Jul 21, 2020, 3:13:50 AM7/21/20
to FishCooking
That is also related to Stockfish's low-depth pruning. Compare non-NNUE SF in the same experiment.

kellyki...@gmail.com

unread,
Jul 21, 2020, 2:29:39 PM7/21/20
to FishCooking
I have two branches running Monte Carlo tree search with a 1-ply look-ahead. SF NNUE as Black plays the opening correctly against 1. e4 with 1... e5 2. Nf3 Nc6, but blunders after the pieces interlock. Regular SF, by contrast, blunders against 1. e4 with 1... d5 2. exd5 Qxd5 3. c4 Qe6+ and continues making useless checks with the queen. SFNNUE never does this! I will upload the branches soon; Stockfish MCTS is already online.