OK, I have gathered all the information about the net models I use with lc0 in order to play against a human-like opponent.
With the help of Gemini, I can now present the ultimate guide to human-like lc0 nets:
Hey Picochess crew,
Get ready to completely transform your chess experience! While our recent discussions about complex PyTorch models were fascinating, I'm thrilled to introduce a far simpler, more performant, and, frankly, more fun path to creating incredible chess opponents for PicoChess.
We're talking about a new generation of Lc0 neural networks specifically designed to play with human-like style, character, and even fallibility.
The best part? The setup is a dream.
This guide will be your deep dive into this amazing world. Let's get started!
1. The Philosophy: Why Weaker, Human-Like Nets are a Game-Changer
The goal here isn't to build the strongest engine possible—it's to build the most interesting one. Instead of getting steamrolled by a flawless 3500 Elo demigod, you get a sparring partner that feels human, is beatable (but still a challenge!), and is perfect for our Picochess setups.
2. The Tech Evolution: Lc0's Shift from ResNet to Transformers
It's helpful to know the two main types of Lc0 architecture:
Part I: The Pioneers – The Maia Project (ResNet-Based)
Maia was a revolutionary project that trained an AI only on human games from specific rating levels. The key to authentic Maia play is to use a small, fixed node search (e.g., 2-12 nodes). This preserves its human-trained character while allowing it to avoid simple one-move blunders.
Maia Net     Target Elo   Recommended Nodes
maia-1100    ~1100        1–2
maia-1300    ~1300        1–2
maia-1500    ~1500        1–4
maia-1700    ~1700        1–6
maia-1900    ~1900        2–8
maia-2200    ~2200        4–10
maia-2500    ~2500        6–12
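If you like, the table above can be captured in a tiny helper that picks the right node window for a target Elo. This is just a sketch that mirrors the table values; the function name and the nearest-Elo lookup are my own convention, not part of Maia or lc0:

```python
# Recommended fixed-node search windows for the Maia nets, mirroring the
# table above (illustrative values, not an official Maia specification).
MAIA_NODE_RANGES = {
    1100: (1, 2),
    1300: (1, 2),
    1500: (1, 4),
    1700: (1, 6),
    1900: (2, 8),
    2200: (4, 10),
    2500: (6, 12),
}

def recommended_nodes(target_elo: int) -> tuple[int, int]:
    """Return the (min, max) node window of the closest-rated Maia net."""
    closest = min(MAIA_NODE_RANGES, key=lambda elo: abs(elo - target_elo))
    return MAIA_NODE_RANGES[closest]

print(recommended_nodes(1850))  # (2, 8) – closest net is maia-1900
```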
I have added the 2200 and 2500 nets to my maia configuration:
maia.uci
[DEFAULT]
Hash = 192
Threads = 1
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1100.pb.gz
;SyzygyPath = /opt/picochess/tablebases/syzygy
[Elo@1100]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1100.pb.gz
[Elo@1200]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1200.pb.gz
[Elo@1300]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1300.pb.gz
[Elo@1400]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1400.pb.gz
[Elo@1500]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1500.pb.gz
[Elo@1600]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1600.pb.gz
[Elo@1700]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1700.pb.gz
[Elo@1800]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1800.pb.gz
Threads = 2
[Elo@1900]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-1900.pb.gz
Threads = 2
[Elo@2200]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-2200.pb.gz
Threads = 2
[BigMaia@2500]
WeightsFile = /opt/picochess/engines/aarch64/maia_weights/maia-2500.pb.gz
Threads = 2
Part II: The Personalities – Detlef Kappe's "Gyal" Family (ResNet-Based)
This is where the magic truly happens. Detlef Kappe created a legendary series of nets, each with a distinct and vibrant personality, often by blending human Lichess data with Stockfish analysis.
Master Download Link for Kappe's Nets: https://github.com/dkappe/leela-chess-weights/wiki/Bad-Gyal
Meet the "Gyal" Family:
Part III: The Future is Now – Fine-Tuned Transformer Nets
These nets are created by taking a god-tier Lc0 Transformer and fine-tuning it on hundreds of thousands of high-level human games. This shapes its style to be more human while retaining immense power.
Origins:
The Ultimate Strength Guide: Nodes vs. FIDE Elo
This amazing table, based on data from Kronos, is your cheat sheet. Crucial Rule: maia3 needs roughly double the nodes of classic2 for the same strength.
Target Elo   Elite v2 Nodes/Move   classic2 Nodes/Move   maia3 (Est. Nodes)
~400         Policy/1              -                     -
~1500        8                     -                     -
~1600        13                    -                     -
~1700        20                    -                     -
~1800        31                    -                     -
~1900        48                    -                     -
~2000        -                     1                     2
~2100        -                     5                     10
~2200        -                     10                    20
~2300        -                     15                    30
~2400        -                     21                    42
~2500        -                     28                    56
~2600        -                     41                    82
~2700        -                     63                    126
~2800        -                     103                   206
~2900        -                     180                   360
~3000        -                     336                   672
~3100        -                     685                   1370
~3200        -                     1600                  3200
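The "double the nodes for maia3" rule and the classic2 column above can be turned into a small lookup helper. A sketch only, using Kronos's table values as-is; the function name and nearest-Elo rounding are my own:

```python
# classic2 nodes-per-move for a target FIDE Elo, taken from the table above
# (Kronos's data). Rule of thumb: maia3 needs roughly double the nodes of
# classic2 for the same strength.
CLASSIC2_NODES = {
    2000: 1, 2100: 5, 2200: 10, 2300: 15, 2400: 21, 2500: 28,
    2600: 41, 2700: 63, 2800: 103, 2900: 180, 3000: 336,
    3100: 685, 3200: 1600,
}

def nodes_for_elo(target_elo: int, net: str = "classic2") -> int:
    """Look up the node budget at the nearest tabulated Elo step."""
    nearest = min(CLASSIC2_NODES, key=lambda e: abs(e - target_elo))
    base = CLASSIC2_NODES[nearest]
    return base * 2 if net == "maia3" else base

print(nodes_for_elo(2500))            # 28
print(nodes_for_elo(2500, "maia3"))   # 56
```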
Downloads for the Transformer Gods:
Be aware that not all node counts can be achieved on every Raspberry Pi (but no worries - that might even make for a more human-like playing style ;-)
About setting the node limit
You have two options: define an explicit node limit per Elo level in the engine's uci file (via PicoNode), or set the node limit manually.
Elo-level samples with explicit node settings in the uci file for lc0:
[Elite@400]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/elite_v2.pb.gz
Threads = 2
PicoNode = 1
[Elite@1500]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/elite_v2.pb.gz
Threads = 2
PicoNode = 8
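These per-engine uci files are plain INI: everything under [DEFAULT] applies to every level, and each [Name@Elo] section only overrides what it changes. A minimal sketch of how such a file resolves, using Python's configparser purely for illustration (PicoChess has its own reader; the file content below is a trimmed-down example, not a real config):

```python
import configparser
import textwrap

# A miniature uci file in the same shape as the samples above.
sample = textwrap.dedent("""\
    [DEFAULT]
    Hash = 192
    Threads = 1
    WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/elite_v2.pb.gz

    [Elite@400]
    PicoNode = 1

    [Elite@1500]
    PicoNode = 8
""")

config = configparser.ConfigParser()
config.read_string(sample)

# Each named section inherits the [DEFAULT] values and adds its overrides.
level = config["Elite@1500"]
print(level["Hash"])      # inherited from [DEFAULT]: 192
print(level["PicoNode"])  # section-specific: 8
```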
This is what my current lc0 uci file looks like (I have not added explicit node settings to my lc0 entries yet, as I use the manual method):
[DEFAULT]
Hash = 192
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/792013-192x15.txt
##WeightsFile = /opt/picochess/engines/armv7l/lc0_weights/t1-256x10-distilled-swa-2432500.pb.gz
##WeightsFile = /opt/picochess/engines/armv7l/lc0_weights/128x10-t60-2-2990.txt
SyzygyPath = /opt/picochess/tablebases/syzygy
Backend = blas
MinibatchSize = 16
CPuct = 1.745000
MaxPrefetch = 0
SmartPruningFactor = 5
MaxCollisionVisits = 1
[01-Mean Girl]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/meangirl-8.pb.gz
Threads = 1
[02-Bad Gyal]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/badgyal-8.pb.gz
Threads = 1
[03-Good Gyal]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/goodgyal-7.pb.gz
Threads = 1
[04-Evil Gyal]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/evilgyal-6.pb.gz
Threads = 1
[05-Tiny Gyal]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/tinygyal-8.pb.gz
Threads = 1
[06-classic1]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/classic1.pb.gz
Threads = 1
[07-classic2]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/classic2.pb.gz
Threads = 1
[08-maia3]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/maia3.pb.gz
Threads = 2
[09-elitev2]
WeightsFile = /opt/picochess/engines/aarch64/lc0_weights/elite_v2.pb.gz
Threads = 2
[10-Std1Core]
Threads = 1
[11-Std2Cores]
Threads = 2
[12-Std3Cores]
Threads = 3
[13-Std4Cores]
Threads = 4
So, fire up your Pi, download a few nets, and start experimenting. Begin with a low node count, see how long your Pi takes to make a move, and adjust to find the perfect balance of strength and speed for your enjoyment.
Happy playing!
Dirk
--
You received this message because you are subscribed to the Google Groups "PicoChess" group.
To unsubscribe from this group and stop receiving emails from it, send an email to picochess+...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/picochess/84cba7fd-0ec6-4701-8cb3-d8fb3bb201ccn%40googlegroups.com.
This link was missing - thanks Luigi!
Dirk
Hi Thomas,
no worries: you don't need to understand all the deep-learning neural network nitty-gritty to enjoy the cake. Hopefully some of these "human" neural network opponents will find their way into the images provided by Randy.
I have created an overview (for myself) of the whole complex deep-learning business; maybe some of you will find it helpful.
Dirk
A Grandmaster's Guide to AI Brains: Understanding the New Wave of Chess Engines
You've probably noticed a zoo of new, weirdly-named chess engines popping up lately: Maia, Maia2, Meangirl, Elite, Chess-Transformers... What's the deal? Aren't they all just "engines"?
Well, not quite. We're moving beyond the age of engines that just calculate billions of moves. The new kids on the block use "deep learning," and they learn and "think" about chess in fundamentally different ways. Some try to be gods, others try to be... well, us!
Let's break down the key differences. Think of this as getting to know the personalities of your new digital opponents.
1. The Brain's Blueprint: Engine Architectures
The "architecture" is the fundamental design of the neural network. Just like a car can have a V8 engine or an electric motor, AIs have different brain structures for different tasks.
a) The Classic: ResNet (The Image Recognizer)
Think of a Residual Network (ResNet) as an AI that sees the chessboard as a picture. It’s incredibly good at recognizing static patterns in that picture: "Aha, this pawn structure looks good," or "That knight on f5 is a monster."
b) The New Challenger: Transformers (The Language Master)
This is the same architecture that powers models like ChatGPT. A Transformer doesn't see the board as a static image; it reads the entire game as a language. The input isn't just the board position, but the sequence of moves that got there (e.g., "1. e4 e5 2. Nf3 Nc6...").
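And here is a toy sketch of the "game as language" idea: the move list is split into a token sequence, which is the kind of input a Transformer attends over. Purely illustrative - real chess Transformers define their own vocabularies and tokenizations:

```python
import re

def tokenize_game(movetext: str) -> list[str]:
    # Strip move numbers like "1." and split the remaining SAN moves
    # into one token each (a deliberately tiny vocabulary).
    return re.sub(r"\d+\.", " ", movetext).split()

print(tokenize_game("1. e4 e5 2. Nf3 Nc6"))  # ['e4', 'e5', 'Nf3', 'Nc6']
```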
2. The School of Chess: How These Engines Learn
An AI is only as good as its teachers and its textbooks. This is where the philosophies diverge dramatically.
a) The "Zero" Method: Learning From the Void (Leela Chess Zero)
This is the famous AlphaZero approach. You give the AI a completely blank slate—it only knows the rules of chess. Then, you make it play millions of games against itself.
b) The Human-Trained Method: Learning From Us! (Maia, Kappe's Nets, Transformers)
These engines are designed to play more like humans. To do that, they study... well, humans!
3. How They "Think": Search Methods
A neural net provides the "intuition," but an engine still needs a way to "think" ahead and calculate variations.
a) Traditional Search & NNUE (The Deep Calculator)
Traditional engines like Stockfish use a brute-force (but very clever) search method called Alpha-Beta Pruning to explore millions of possible future positions. For decades, they evaluated these positions using hand-crafted rules (e.g., "a queen is worth 9 points," "rooks on open files are good").
The modern twist is NNUE (Efficiently Updatable Neural Network). This replaces the old hand-crafted rules with a small, super-fast neural network that runs on the CPU. So, Stockfish still uses its powerful Alpha-Beta search, but the final "is this position good or bad?" question is answered by a mini-AI.
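The pruning idea can be shown on a toy game tree. This is the minimal textbook version, not Stockfish's actual search, which adds move ordering, transposition tables, and much more:

```python
# Minimal alpha-beta search over a hand-built game tree.
# Leaves are static evaluations; interior nodes are lists of children.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):      # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                   # alpha cutoff
            break
    return value

# Three candidate moves, each answered by two replies; minimax value is 6,
# and the last subtree is pruned after its first leaf (1 <= alpha of 6).
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
```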
b) Monte Carlo Tree Search - MCTS (The Smart Scout)
This is what Lc0 uses. MCTS doesn't try to look at every branch of the game tree. Instead, it acts like a team of scouts.
Why use MCTS for these big networks? Because it pairs perfectly with the network's "intuition." The network intelligently guides the search, focusing the engine's power only on the most promising variations instead of wasting time on obviously bad moves.
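The scout analogy can be sketched with the classic UCB1 selection rule on a one-ply toy problem: three "moves" whose playouts succeed with different hidden probabilities. Note this is plain UCB1 for illustration; Lc0's real search uses the PUCT variant, which also weighs in the network's policy prior:

```python
import math
import random

# Toy MCTS loop: select a move by UCB1, run a random "rollout",
# back up the result. The exploration term keeps trying weaker moves
# occasionally; visits concentrate on the best move over time.
random.seed(42)

win_prob = {"good": 0.7, "ok": 0.5, "bad": 0.2}   # hidden ground truth
visits = {m: 0 for m in win_prob}
wins = {m: 0.0 for m in win_prob}

def ucb1(move, total, c=1.4):
    if visits[move] == 0:
        return float("inf")                       # try every move once
    return wins[move] / visits[move] + c * math.sqrt(math.log(total) / visits[move])

for i in range(1, 2001):
    move = max(win_prob, key=lambda m: ucb1(m, i))            # selection
    result = 1.0 if random.random() < win_prob[move] else 0.0 # rollout
    visits[move] += 1                                         # backup
    wins[move] += result

best = max(visits, key=visits.get)  # final choice: the most-visited move
print(best, visits)
```

After a few thousand iterations the most-visited move is the objectively best one, which is exactly why "most visits" is used as the final move choice in MCTS engines.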
4. Taming the Beast: Controlling Strength and Personality
A full-power Lc0 is a god. But what if you want a fun, beatable, or "stylish" opponent? You need to put a leash on it.
The result? An engine that has the superhuman intuition of Lc0 but lacks the deep tactical calculation to be perfect. It might play a beautiful, strategic game but can sometimes miss a deep tactic. This is exactly how Kappe's nets, Maia 3, Elite, etc., are configured to create distinct and human-like playing styles!
Summary Table: The AI Chess World at a Glance
🧠 1. Traditional Engines (e.g., Stockfish 17)
🤖 2. "Zero" Method Engines (e.g., Lc0)
🧍‍♂️ 3. "Human-Trained" Models (e.g., Maia, Kappe, Transformer-based bots)
Hope this helps demystify these amazing new engines! It's an exciting time to be a chess fan.
Happy chess playing!