Hi. Maybe it's a newbie question, but since ladders are part of the well-defined topology of the goban (as is the current liberty count of each chain of stones), couldn't feeding those values to the networks (from the very start of the self-training process) help with large shichos and sekis?
Regards,
Claude
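For what it's worth, the original AlphaGo (the 2016 Nature version) did feed exactly this kind of information to its networks: liberty counts and ladder capture/escape results were among its binary input planes. Below is a minimal sketch of what liberty planes could look like, assuming a plain numpy board (0 = empty, 1 = black, 2 = white) and a cap of 8 liberties; both choices are illustrative, not any engine's actual format.

```python
import numpy as np

SIZE = 19

def chain_liberties(board, x, y):
    """Count liberties of the chain containing the stone at (x, y)
    by flood fill. board: SIZE x SIZE ints, 0 empty, 1 black, 2 white."""
    color = board[x, y]
    stack, seen, libs = [(x, y)], {(x, y)}, set()
    while stack:
        cx, cy = stack.pop()
        for nx, ny in ((cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1)):
            if 0 <= nx < SIZE and 0 <= ny < SIZE:
                if board[nx, ny] == 0:
                    libs.add((nx, ny))          # empty neighbor = liberty
                elif board[nx, ny] == color and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    stack.append((nx, ny))      # extend the chain
    return len(libs)

def liberty_planes(board, max_libs=8):
    """One binary 19x19 plane per liberty count 1..max_libs (capped),
    roughly in the spirit of AlphaGo's hand-crafted input features.
    Recomputes the flood fill per stone for clarity; a real engine
    would cache liberties per chain."""
    planes = np.zeros((max_libs, SIZE, SIZE), dtype=np.float32)
    for x in range(SIZE):
        for y in range(SIZE):
            if board[x, y] != 0:
                k = min(chain_liberties(board, x, y), max_libs)
                planes[k - 1, x, y] = 1.0  # legal chains always have >= 1 liberty
    return planes
```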
This kind of joseki is not good for Zero-type programs. The ladder and the
capturing race are intricately combined. In AlphaGo's published self-play
games (both the AlphaGo Zero and Master versions), this joseki is rare.
-------------------------------------------------------------
I found this joseki in kata1_b40s575v100 (black) vs LZ_286_e6e2_p400 (white).
http://www.yss-aya.com/cgos/viewer.cgi?19x19/SGF/2021/01/22/733340.sgf
> a very large sampling of positions from a wide range
> of human professional games, from say, move 20, and have bots play starting
> from these sampled positions, in pairs once with each color.
This sounds interesting.
I will think about another CGOS that handles this.
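As a rough sketch of the pairing logic in the quoted proposal (the position sampling and the server hookup are left abstract; `sample_size` and `start_move=20` are just illustrative parameters):

```python
import itertools
import random

def schedule_games(sgf_paths, bots, sample_size=100, start_move=20):
    """Yield (sgf, start_move, black, white) game specs: every bot pair
    plays each sampled human-game position twice, once with each color.
    How the server actually replays the SGF up to start_move is left to
    the (hypothetical) modified CGOS."""
    for sgf in random.sample(sgf_paths, sample_size):
        for bot_a, bot_b in itertools.combinations(bots, 2):
            yield (sgf, start_move, bot_a, bot_b)  # bot_a plays black
            yield (sgf, start_move, bot_b, bot_a)  # bot_b plays black
```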
Each convolutional layer should spread the information across the board.
I think AlphaGo Zero used 20 residual blocks (two 3x3 convolutions each)?
So even 3x3 filters would tell you about the whole board, though the
signal from the opposite corner of the board might end up a bit weak.
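A back-of-the-envelope check of that, assuming stride-1 3x3 convolutions (the layer count for AlphaGo Zero's 20-block tower is from the paper: one input convolution plus two convolutions per residual block):

```python
def receptive_radius(n_conv_layers, kernel=3):
    """Each stride-1 k x k convolution grows the receptive-field
    radius by (k - 1) // 2, i.e. by 1 for a 3x3 filter."""
    return n_conv_layers * ((kernel - 1) // 2)

# Seeing the far corner from a corner on 19x19 needs radius >= 18,
# so 18 stride-1 3x3 layers suffice in principle.
print(receptive_radius(18))  # 18

# AlphaGo Zero's 20-block net: 1 input conv + 20 blocks x 2 convs = 41
# layers, comfortably past that threshold (though, as noted above, the
# signal from the far corner can still be weak in practice).
print(receptive_radius(1 + 20 * 2))  # 41
```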
I think we can assume it is doing that successfully, because otherwise
we'd hear about it losing lots of games in ladders.
> something the first version of AlphaGo did (before they tried to make it
> "zero") and something that many other bots do as well. But Leela Zero and
> ELF do not do this, because of attempting to remain "zero", ...
I know that zero-ness was very important to DeepMind, but I thought the
open-source dedicated Go bots that copied it did so because AlphaGo Zero
was stronger than AlphaGo Master after 21-40 days of training.
I.e., in the rarefied atmosphere of superhuman play, that starter package
of human expert knowledge was considered a weight around its neck.