I agree we need nomenclature and descriptions to identify the differences between a ”traditional” engine, say SF, and this new breed of engines popularised by AZ and LC0. I have had some exposure to both concepts, but have not studied the SF or LC0 code, so I'm going to give it a try and hope it's not a complete disaster :-)
First: One (the) major difference is that SF is ”pre-programmed” using many, many hours of human tweaking of move ordering and board evaluation functions, while LC0 learns those concepts by self-play (using even more hours?). You could call this the engine creation phase (or something more descriptive - I am at a loss right now). For SF it ends up in a tight and fast C++ program that a number of chess programmers fully understand; for LC0 it ends up in a huge number of network weights whose meaning nobody really knows. Compare this to the visual cortex V1, V2 … everyone knows they work like a charm but nobody can really explain how and why … Below I'm only concerned with how the engines operate once created ...
Both SF and LC0 attempt to select the ”best” move at a root node (roughly, the current board state) by expanding that node into valid child nodes (board states after moves), then selecting one child node (by some algorithm) and trying to determine what the opponent's best move is from this new position, and so on. The goal for both engines is to find a line of play that is ”God-like” (either by divine knowledge or brute force) and then make the move that takes the first step into this line of play. I'm aware this is painfully obvious for everyone, but I want to make the point that both engine types perform a _search_ (in a vast search space - the tree of all possible moves) to identify (by good approximation) this best line of play.
There are two main conceptual differences between SF and LC0. The first is in the search algorithm used, and the second is in how the information that guides the search is determined:
SF uses alpha-beta search (with zillions of enhancements), which is heavily dependent on the accuracy of the eval() function - handcrafted by chess experts. Move ordering is also of utmost importance to avoid searching (potentially) fruitless branches, so human knowledge together with clever techniques is employed there as well. The concept of ”depth” is easily understood, as this is the number of half-moves before the algorithm ends a line (depth-first) and starts over with the next move at the top level. If this depth happens to coincide with an exchange, check etc., a quiescence search (QSearch) is used to extend the line until a stable evaluation can be fed up the tree. So, in summary: the algorithm performs a depth-first search while pruning parts of the tree that cannot improve the given player's position (as judged by the eval() function).
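To make that concrete, here is a minimal Python sketch of negamax-style alpha-beta with a quiescence search. Everything here (the position API with legal_moves(), play(), is_noisy(), evaluate() and move_order_score()) is a hypothetical placeholder for illustration, not Stockfish's actual code, and terminal positions (mate/stalemate) are ignored for brevity:

```python
# Hypothetical sketch of alpha-beta with quiescence search (negamax form).
# 'pos' is assumed to expose legal_moves(), play(move) -> new position,
# is_noisy(move) (captures/checks), a handcrafted evaluate() and a
# move_order_score(move) heuristic -- all placeholders for illustration.

def quiescence(pos, alpha, beta):
    """Extend the search past the nominal depth until the position is quiet."""
    stand_pat = pos.evaluate()            # static eval from the side to move
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    for move in pos.legal_moves():
        if not pos.is_noisy(move):        # only follow captures / checks
            continue
        score = -quiescence(pos.play(move), -beta, -alpha)
        if score >= beta:
            return beta                   # fail-high: opponent won't allow this line
        alpha = max(alpha, score)
    return alpha

def alphabeta(pos, depth, alpha, beta):
    """Depth-first search; prunes branches that cannot change the result."""
    if depth == 0:
        return quiescence(pos, alpha, beta)
    # Good move ordering means earlier cutoffs and less wasted work.
    moves = sorted(pos.legal_moves(), key=pos.move_order_score, reverse=True)
    for move in moves:                    # mate/stalemate handling omitted
        score = -alphabeta(pos.play(move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                   # beta cutoff: skip remaining siblings
        alpha = max(alpha, score)
    return alpha
```

The beta cutoff is the key: once one move refutes a line, the remaining siblings at that node are never searched, which is why move ordering matters so much.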
LC0 instead uses Monte Carlo Tree Search (MCTS). To back up a little: a Monte Carlo method, in its pure form, would expand the root node using valid moves (i.e. create child nodes) and then perform many (say 10M) rollouts from each one using random (but valid) moves until a win, draw or loss is obtained. Say each rollout consists of an average of 60 moves - that is 600M moves per child node! The ”value” of each of those child nodes would be the average score over those 10M rollouts, and the child with the highest ”value” (average score) is then chosen. Anyone aware of the immense state space of a chess game realises that this still only samples a minuscule part of the full tree (so pretty useless). Still, rollouts using good chess engines with a slightly randomised move ordering / eval() are useful game analysis tools.

Anyway, MCTS improves on this by implementing a tree search algorithm and balancing the tree between exploitation (digging deeper) and exploration (looking at new lines). This balance is achieved by the UCB formula, which selects the next node to explore. The random rollouts and averaging of scores are, however, still present in the algorithm. Although an MCTS implementation (such as LC0) knows what depth it is currently at, it makes no sense at all to say that LC0 searches to depth 32, for example. MCTS traverses the search space tree in a highly asymmetric fashion and is not guided by any depth limit at all (only ”Interesting, I'll look deeper” or ”Hey, haven't seen this, I'll check it out”).

You may have noticed that so far no move ordering or eval() function is needed. This is true, but it is also the reason Dietrich suggested I remove the text ”Monte Carlo” from my previous answer :-) since LC0 does not use random rollouts as classic MCTS calls for. And this is where the NN (the deeeep one) enters the picture. Given the current board state (and 8 previous ones, I believe), it feeds this data through a number of convolutional, ReLU … etc. layers until it produces 1) a policy vector, which tweaks the UCB formula and therefore guides the MCTS in selecting the next node to explore - I'd say it is somewhat akin to the SF move ordering logic - and 2) a value scalar that estimates the node ”value” that would have been obtained _if_ infinite bona fide rollouts had been performed - I'd say it is somewhat akin to the SF eval() function … if one feels the need for a simple comparison.
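As a rough illustration (not LC0's actual code), here is a short Python sketch of one MCTS iteration in the AlphaZero/LC0 style: selection by a PUCT-type formula where the policy prior steers exploration, expansion of a leaf, the NN value head standing in for a rollout, and value backup by averaging. The Node class, net.evaluate() and the position API are all hypothetical placeholders, and the exact formula and constants LC0 uses differ in detail:

```python
import math

# Hypothetical sketch of one AlphaZero-style MCTS iteration: PUCT selection,
# NN evaluation instead of a random rollout, and value backup by averaging.
# 'net.evaluate(pos)' returning (policy, value) and pos.play(move) are
# placeholders for illustration only.

C_PUCT = 1.5   # exploration constant (tunable)

class Node:
    def __init__(self, prior):
        self.prior = prior          # P(s, a) from the policy head
        self.visits = 0             # N(s, a)
        self.value_sum = 0.0        # sum of backed-up values
        self.children = {}          # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node):
    """Pick the child maximising Q + U (exploitation + policy-guided exploration)."""
    total = math.sqrt(sum(c.visits for c in node.children.values()) + 1)
    def puct(child):
        u = C_PUCT * child.prior * total / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=lambda mc: puct(mc[1]))

def simulate(root_pos, root, net):
    """One iteration: walk down with PUCT, expand a leaf, back up the NN value."""
    pos, node, path = root_pos, root, [root]
    while node.children:                     # selection
        move, node = select_child(node)
        pos = pos.play(move)
        path.append(node)
    policy, value = net.evaluate(pos)        # value head replaces the rollout
    for move, prior in policy.items():       # expansion, priors from policy head
        node.children[move] = Node(prior)
    for n in reversed(path):                 # backup, flipping sign each ply
        n.visits += 1
        n.value_sum += value
        value = -value
```

The U term shrinks as a child gets visited more, so the search gradually shifts from following the policy prior towards the moves that have actually backed up the best average value - which is also why ”depth” is not a meaningful stopping criterion here.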
The MCTS algorithm driven by the NN (policy/value) thus makes for really exciting studies! What parts of the NN light up (high activations) in different tactical situations? I guess someone could feed LC0 any known chess problem, look at how the network layers light up, find interesting feature extractions and, ultimately, teach humans how to play better chess. Some day ...
So, to the OP's (great) question I would say: no (and she doesn't need to), because any quiescent state is (somewhere) encoded as a feature in the NN, which impacts how the policy vector and value scalar are provided to the MCTS. How ”well” it is encoded and how much it directs the search is probably impossible to answer - some research into this would be very interesting!
… and much kudos to all you guys making this possible! Anyone interested in NNs, AI and/or chess must find these exciting times! And to kingscrusher, who produces these interesting LC0 analysis videos!
PS. When I write ”move”, I really mean a half-move, but I know that you know ...