--
You received this message because you are subscribed to the Google Groups "SHOGI-L" group.
To unsubscribe from this group and stop receiving emails from it, send an email to shogi-l+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/shogi-l/ba4ffeb6-afa8-4452-9e82-6c1610837847%40googlegroups.com.
Hi. ☺️ There is AobaZero for shogi: https://github.com/kobanium/aobazero/blob/release/README_en.md
On Thu, Jun 4, 2020, 19:14 BCM <shog...@arcor.de> wrote:
Hi,
There are:
- Leela Zero (for the game of Go)
- Leela Chess Zero
- ChineseChess-AlphaZero
All are free, open-source, and neural-network-based, and they can now run on CPUs (no need for GPUs anymore).
Isn't there anything similar for shogi?
BCM
The problem is that these 'Zero' engines are not really designed at all. They
just learned to play the game by themselves, and even their programmers have
no clue as to how they eventually manage it. Not only do they not teach humans
to play better shogi, they also do not teach programmers how to make better
shogi engines...

I would also be interested in seeing the Zero methodology applied to another national chess variant: Thai Makruk. It has stricter endgame counting rules, which, if encoded correctly, might overcome the stylistic complaint about NN-based chess engines: that they play too loosely when (a) far ahead and (b) dead even, due to the one-size-fits-all 50-move rule.
> Did the AI also learn e.g. the Anaguma castle by itself, or did it not play it at all?

From what I remember, it did not discover the traditional castles like Anaguma, Yagura, and Mino.

> Or how about ranging-/static-rook usage during the learning phase?

The papers I saw did have interesting graphs in that area, and they showed a vast preponderance of double static rook. I wonder whether that has influenced Sota Fujii's choice of openings.

On Monday, June 8, 2020 at 7:55:13 AM UTC-7 BCM wrote:

These 'Zero' engines are not good for teaching, but we can still get some benefit and some interesting insights from them.
1) For example, read this: https://arxiv.org/pdf/1712.01815.pdf (page 6).
It looks as if this AI recapitulated centuries of human opening development in a few hours.
E.g. the Caro-Kann opening: after 2 hours of training, the AI decided it was a good opening and used it frequently, but after 6 hours of training, it decided to stop using it.
Maybe a chess player with a good, comfortable historical chess database is reading this: it would be very interesting (for me) to see these 12 graphs for human-only games as well.
So, e.g., how often was the Caro-Kann played in each year of history by humans?
Did this AI and the H(uman)I have the same development?
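Producing such a per-year graph from a human game database is mostly a counting exercise. Below is a minimal, crude sketch (my own illustration, not from this thread) that scans a PGN-format database with only the Python standard library and tallies, per year, the games that open 1.e4 c6 (the Caro-Kann); a real analysis would want a proper PGN parser and ECO classification (B10-B19).

```python
import re
from collections import Counter

def caro_kann_counts(pgn_text):
    """Return {year: number of games opening 1.e4 c6} for a PGN database.

    Crude sketch: splits the database on blank lines that precede a new
    header block, then checks the Date tag and the first two moves with
    regular expressions. Adequate for well-formed PGN exports only.
    """
    counts = Counter()
    for game in re.split(r"\n\s*\n(?=\[)", pgn_text.strip()):
        date = re.search(r'\[Date "(\d{4})', game)
        # Count a game as Caro-Kann if its movetext starts 1. e4 c6.
        if date and re.search(r"1\.\s*e4\s+c6\b", game):
            counts[date.group(1)] += 1
    return counts

# Tiny inline example: one Caro-Kann game, one 1.e4 e5 game.
SAMPLE = """\
[Event "A"]
[Date "1970.01.01"]
[Result "*"]

1. e4 c6 *

[Event "B"]
[Date "1970.02.01"]
[Result "*"]

1. e4 e5 *
"""

print(caro_kann_counts(SAMPLE))  # Counter({'1970': 1})
```

Feeding a large database (e.g. a PGN export of master games) through this and plotting the yearly counts would give the human counterpart to the AlphaZero training-time graphs.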
And what about shogi?
Such a learning curve would also be very interesting there.
Did the AI also learn e.g. the Anaguma castle by itself, or did it not play it at all?
Or how about ranging-/static-rook usage during the learning phase?
Following the change of playing style would be very interesting.
2) See the book "Rethinking Opening Strategy: AlphaGo's Impact on Pro Play" by Yuan Zhou.
Indeed, humans can learn from it: "The AI program AlphaGo Zero has introduced several new ideas about how to play ..."
And how about the 8th move in this chess game: https://www.youtube.com/watch?v=A-vNq61KfLs ?
Anything similar in shogi?