I am planning to write an Othello game with average AI. Can anyone explain
to me how the stability of a given board state is calculated?
Thanks
-dAniEL
I don't know how to calculate the *precise* number of stable discs on the board.
But I can tell you how modern computer programs try to overcome the problem
of calculating stability (and mobility etc.).
The evaluation function in modern Othello programs is based on patterns.
A pattern is a set of squares on the board, and the contents of a pattern
are called a configuration.
An example of a pattern and an associated configuration is
- [a1,a2,a3,a4,b1,b2,b3,c1,c2,d1]
- [empty,empty,w,b,w,b,w,w,w,b]
To evaluate the score of a given position you take the sum of the precomputed
values of the configurations present on the board. The only problem now is
how to precompute the values. The values are usually based on
a big set of well-played games - for example, they can be based on statistics
(e.g. the average endgame score when configuration x is present).
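As a rough illustration of how the lookup works, here is a minimal sketch in
C++ (the board encoding, type names and table layout are my own, not taken
from any particular program): each pattern's configuration is read off the
board as a base-3 index into a table of precomputed values, and the evaluation
is just the sum of those lookups.

#include <array>
#include <vector>

using Board = std::array<int, 64>;   // 0 = empty, 1 = black, 2 = white

struct Pattern {
    std::vector<int> squares;        // board indexes belonging to the pattern
    std::vector<double> value;       // 3^squares.size() precomputed values
};

// Encode the contents of the pattern squares as a base-3 number.
int configuration_index(const Board& b, const Pattern& p) {
    int index = 0;
    for (int sq : p.squares)
        index = index * 3 + b[sq];
    return index;
}

// The position score is simply the sum of the precomputed configuration values.
double evaluate(const Board& b, const std::vector<Pattern>& patterns) {
    double score = 0.0;
    for (const Pattern& p : patterns)
        score += p.value[configuration_index(b, p)];
    return score;
}

For the 10-square pattern listed above the value table would have 3^10 = 59049
entries (symmetric configurations can share entries).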
Gamesets can be downloaded from:
- http://www.cs.ualberta.ca/~mburo/log.html
- http://www.andreazinno.it/prog.html
A better description of a pattern-based eval function can be found in:
@InCollection{buro99experiments,
  author    = "Michael Buro",
  title     = "Experiments with {Multi-ProbCut} and a new
               high-quality evaluation function for {Othello}",
  booktitle = "Games in {AI} Research",
  address   = "Maastricht, The Netherlands",
  editor    = "H. J. van den Herik and H. Iida",
  year      = "1999",
  url       = "citeseer.nj.nec.com/buro97experiments.html"
}
PS. If anyone knows how to compute the *precise* number of stable
discs, I'll be eager to hear the algorithm.
--
Lukasz Anforowicz | I used to think that the brain was the
mailto:luk...@mimuw.edu.pl | most wonderful organ in my body. Then
| I realized who was telling me this.
| -- Emo Phillips
there are a number of ways: this is the way i did it, and it is
precise, and reasonably fast:
first of all, you notice that you can't have ANY stable pieces before
you get a corner. next, you notice that once a piece is stable, it is ALWAYS
a stable piece, so you store this in a global array. if the liberty
of a square is more than 4, it CAN'T be stable. next comes a recursive
algorithm: if the number of stable discs around a disc is 4, then it
is also stable. you mark the ones you have checked, so that you don't go
into an infinite loop. (this is the best way i have found)
another way to do it is to step along the side from the corner, but i
have found this to be slightly flawed, and even if i am wrong, quite
difficult. this algorithm (the first one) doesn't waste TOO much time
and works well.
have fun
--sam
Let's consider the upper edge of a board (row 1).
X - black disc
o - white disc
White to move:
a b c d e f g h
1 X o o X o X - -
2 X X X X X X X X
3 X X X X X X X X
4 X X X X X X X X
5 X X X X X X X X
6 X X X X X X X X
7 X X X X X X X X
8 X X X X X X X X
The white disc on e1 is *unstable* (even though the discs d1, d2, e2 and f2
around it are stable).
Isn't your algorithm going to say that e1 is stable?
regards
ego -= small_sum.
--sam
If you're already invested in pattern-lookups, calculating stability
just requires checking the stability for the square in the 4 vectors
that pass through that square.... so not terribly complicated
to implement.
But I seem to recall seeing some papers that doubted the value of
calculating stability at all, as it typically is a very small
part of the evaluation function's value until the very end of the
game, when the fast endgame searchers are starting to kick in anyway.
>pieces would be in jeopardy. my algorithm never overcounts (which would be
>MUCH worse).
I'm curious--why is this much worse?
(I seem to recall non-conservative estimates of mobility demonstrating good
bang-for-the-buck in early feature-based evaluation functions. It isn't
self-evident to me that an overestimating stability metric is any worse...)
Matt
Stability can be quite useful for terminating a wld-search before
the board is full. Normally you determine the game outcome (win/loss)
when the board is full, but the outcome can also be
determined once one of the players has more than 32 stable discs.
Could you please provide further details on the pattern approach?
Maybe you can point to some articles on the web? Thanks in
advance! :)
I developed a high-quality Othello program (Virtual Othello) which is
competitive with the best Othello programs in the world, and therefore I'd
like to share my experience in that field. I can say that no
high-quality Othello program uses stability in its evaluation
function, because this feature is hard to compute and there is no need to
compute it: you don't have to abort a WLD (Win/Loss/Draw) search
early when the position has more than 32 stable disks, because even an exact
score search solves such positions in a second even with 34 empties
(when you can't have more than 29 stable disks, or the game is over) - look at
Zebra's results on the FFO Endgame Tests.
I did implement a stability feature in earlier versions of my
evaluation function, but then I read a very, very useful article by Michael
Buro, "From Simple Features to Sophisticated Evaluation Functions"
(http://www.cs.ualberta.ca/~mburo/ps/glem.ps.gz), where he describes an
interesting approach to building an evaluation function and describes the
evaluation function used in Michael's program Logistello - the
first computer program ever to beat a human world champion, Takeshi Murakami,
in a 6-game match with the score 6:0.
So I think you should read this article and some other articles about
alpha-beta pruning, NegaScout/PVS, transposition tables, MPC and EndCut,
move ordering heuristics (like fastest-first, history heuristics, maybe
killer tables and parity regions, etc.), and aspiration search, and then ask
yourself if you need a stability feature.
Good luck,
Dmitriy Iassenev
P.S. To view the article mentioned above you need a viewer for ".ps"
files. I recommend Aladdin Ghostscript.
"Lukasz Anforowicz" <luk...@absurd.mimuw.edu.pl> wrote in message
news:ap35jh$l5g$1...@kenny.mimuw.edu.pl...
It took 2 hours 48 minutes on a very fast cpu
for Zebra to solve 20 positions from the FFO test suite.
That's more than a "second" per test. :-)
You write that endgame search is already fast,
*but* perhaps you could make the endgame search *even*
*faster* if you could terminate the search when the number
of stable discs of one player is greater than 32.
> So I think you should read this article and some other articles about
> alpha-beta pruning, NegaScout/PVS, transposition tables, MPC and EndCut,
> move ordering heuristics (like fastest-first, history heuristics, maybe
> killer tables and parity regions, etc.), and aspiration search, and then ask
> yourself if you need a stability feature.
Ok :-)
Hmpf... many programs seem to be using "fastest-first" search
in their endgame solvers :-) Could you tell us something about
this algorithm? Any papers about it?
Does this approach order the child nodes according to their
branching factor? (the child with the fewest possible moves is considered
first?)
> Hmpf... many programs seem to be using "fastest-first" search
> in their endgame solvers :-) Could you tell us something about
> this algorithm? Any papers about it?
> Does this approach order the child nodes according to their
> branching factor? (the child with the fewest possible moves is considered
> first?)
yes, right you are. BTW even in the midgame fastest-first is a very
efficient way to order moves (I tried to use that ordering heuristic and
got interesting results (still didn't optimize it); I know that the French
program EDAX uses it).
Sorry, I missed the initial message, are you developing your own Othello
program?
Good luck,
Dmitriy Iassenev
btw, how do you compute the stable disks in a position? AFAIK, the
only algorithm that computes the stable disk count exactly in any position
is brute-force search; all the others compute just a subset of the
stable disks and therefore there is no sense in using them.
Good luck,
Dmitriy Iassenev
> Sorry, I missed the initial message, are you developing your own Othello
> program?
I am trying to solve Othello endgames using extended
proof-number search. This will probably be my master's
thesis at Warsaw University. Since I am concentrating
my efforts on *endgame* solving, I think that I won't
end up with a program able to play tournament games.
Currently my program reaches WinZebra's level in terms
of the number of nodes visited. Some positions are solved
using 10 times fewer nodes :-) Of course there are also
positions where I need 10 times more nodes :-)
I see three ways to change (and hopefully improve) pn-search behaviour:
1. different initialisation of pn/dn numbers
2. different propagation of pn/dn numbers
3. faster termination of the search
Ad1.
It is possible to quite accurately approximate the number
of nodes needed to solve a position - one can initialize
pn/dn using this approximation. The problem is how
to approximate. One way to do it is to use something
like (avg_branching^empties) - avg_branching=2 seems
to work quite reasonably. But this is a very rough
estimate - learning from past experience can probably
make these approximations much better.
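As a small illustration of this initialization idea (the names are mine, and
seeding both pn and dn with the same estimate is just my reading of it):

#include <cmath>
#include <cstdint>

// Rough effort estimate for a node with the given number of empty squares.
uint64_t initial_effort(int empties, double avg_branching = 2.0) {
    return static_cast<uint64_t>(std::pow(avg_branching, empties));
}

struct PnNode {
    uint64_t proof;      // estimated effort to prove a win
    uint64_t disproof;   // estimated effort to disprove it
};

// Seed a freshly expanded leaf with the effort estimate instead of 1/1.
PnNode init_leaf(int empties) {
    uint64_t e = initial_effort(empties);
    return PnNode{ e, e };
}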
And one more note - when using the extensions described here,
memory usage is greatly reduced.
Ad2.
It is possible to involve the evaluation function in the
process of propagating pn/dn numbers. I haven't performed
careful experiments to measure the quality of this improvement,
but a program using this propagation method seems to visit
4 times fewer nodes than a program with normal propagation.
Ad3.
Currently I have no good ideas on that one :-/
I just terminate the search one level earlier if
the disc difference is more than 37 (IMO this is
the most one can gain in *one* move).
One can also use transpositions with pn-search,
but IMHO the reduction in the number of nodes visited
is rather small compared to the speed penalty due
to more complex tree maintenance.
> Good luck,
Thanks :-) I've got two more years to finish my research :-)
Hello
>
> btw, how do you compute the stability disks in the position? AFAIC, there is
> the only algorithm to compute stability disk count exactly in any position
> and it is brute-force search, all the others compute just the subset of the
> stable disks and therefore there is no sense to use them.
I don't compute stability in the current version of my program,
because of what you said - I don't know any reasonably efficient
algorithm that would be able to compute precise stability. :-)
Is there a paper/article/whatever that backs up your statement
that there is no stability-calculation-algorithm other than
brute-force search?
> I don't compute stability in the current version of my program,
> because of what you said - I don't know any reasonably efficient
> algorithm that would be able to compute precise stability. :-)
>
> Is there a paper/article/whatever that backs up your statement
> that there is no stability-calculation-algorithm other than
> brute-force search?
I don't know of such a paper, and actually I don't think it is possible to
compute stability faster. I had a discussion with Richard Delorme (author
of Edax) and Gunnar Andersson (author of Zebra) and they don't know if one
exists. You may have noticed that there is an option in Zebra which shows
stable disks, but it doesn't guarantee that it'll show all the stable disks.
What is your program called, and is it downloadable?
Good luck,
Dmitriy Iassenev
I also think that brute-force search is the only way to compute
precise stability. Yet, you can never be 100% sure :-)
> of Edax) and Gunnar Andersson (author of Zebra) and they don't know if it
> exists. If you mentioned, there is an option in Zebra which shows stable
> disks, but it doesn't guarantee that it'll show all the stable disks.
>
> How is your program called and is it downloable?
Good question :-)
Currently the program has no name.
I'll have to think of a name for it :-)
If my program performs reasonably well, I'll make it available
for download (probably under the GNU or BSD licence). I'll post here
when the program is ready.
regards
:-)
> > How is your program called and is it downloable?
>
> Good question :-)
> Currently the program has no name.
> I'll have to think of a name for it :-)
>
> If my program will perform reasonably well, I'll make it available
> for download (probably on GNU or BSD licence). I'll post here
> when the program is ready.
you can try Virtual Othello 1.6, with the stability feature used in the
evaluation function:
http://www.bitstrike.com/files/vo.zip
and Virtual Othello 2.0, without MPC and with the lighter endgame solver:
http://www.bitstrike.com/files/vo20.zip
Good luck,
Dmitriy Iassenev
P.S. You should visit the GGS server, where you can find the best Othello
programs.
How strong is it now, and what algorithms/heuristics do you use?
Good luck,
Dmitriy Iassenev
Currently I have:
- quite fast board representation (8 rows = 8 uint16_t)
- evaluation function based on patterns
(it works quite well, but I am not able to reproduce Michael Buro's
results of nearly getting rid of the error on the learning positions)
- simple alpha-beta
(yes! even no transposition tables and no move ordering)
- proof-number search with transposition tables and some other
extensions
- I have nothing that would deal with opening book
(that's probably because I do not fully understand ideas
about opening book learning, that are mentioned in Michael
Buro's papers)
The program easily beats me :-) It is also able to win on ggs
with ant, but I guess these are not very big achievements, are they? ;-)
(the program doesn't speak with ggs yet; I just typed the moves back
and forth between ant and my program)
So as you can see, there is a lot of work to be done...
I think I'll have a decent endgame search engine in a few
months - I'll at least publish the endgame solver then.
How did you implement it efficiently? I tried to use a board made of two
64-bit integers but got slower results than with a 91-byte board representation:
*********
*________
*________
*________
*________
*________
*________
*________
*________
**********
(I don't have to check if the indexes are out of range)
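For illustration, the padded layout drawn above could be set up like this (the
constants and index mapping are my own, they just follow the diagram):

#include <array>

enum Cell : char { EMPTY = 0, BLACK = 1, WHITE = 2, BORDER = 3 };

constexpr int STRIDE = 9;           // row stride in the 91-cell array
constexpr std::array<int, 8> DIR = { -10, -9, -8, -1, +1, +8, +9, +10 };

// Map file a..h (0..7) and rank 1..8 (0..7) to an index in the 91-cell array.
constexpr int square(int file, int rank) { return 10 + rank * STRIDE + file; }

std::array<Cell, 91> new_board() {
    std::array<Cell, 91> b;
    b.fill(BORDER);                         // everything is border by default...
    for (int rank = 0; rank < 8; ++rank)
        for (int file = 0; file < 8; ++file)
            b[square(file, rank)] = EMPTY;  // ...except the 64 playable squares
    b[square(3, 3)] = WHITE; b[square(4, 4)] = WHITE;   // standard start position
    b[square(3, 4)] = BLACK; b[square(4, 3)] = BLACK;
    return b;
}

A step by any of the eight DIR offsets from a playable square always lands on
another playable square or on a BORDER cell, so the flipping loops never need
a range check.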
> - evaluation function based on patterns
> (it works quite well, but I am not able to reproduce Michael Buro's
> results of nearly getting rid of error on learning positions)
Why? I did it. All you need is to code a gradient descent procedure,
have a good training set (it is available on the internet) and take care of
symmetries (and maybe inversions, but I don't care about them for
certain reasons). BTW in Virtual Othello (henceforth VO or vo) I implemented
FPIT - fast pattern index transformation, where I update the pattern indexes
during disk flipping; it works 25-30%(!) faster than my previous optimized code.
> - simple alpha-beta
> (yes! even no transposition tables and no move ordering)
that's bad; try them, because they speed things up a lot.
How high is the nps (nodes per second) of your program? VO reaches 1.65 Mnps on
a PIII 1.0 GHz.
> - proof-number search with transposition tables and some other
> extensions
I read about that search, but actually don't remember the idea. Can you
explain it?
> - I have nothing that would deal with opening book
> (that's probably because I do not fully understand ideas
> about opening book learning, that are mentioned in Michael
> Buro's papers)
vo doesn't use an opening book either :-(
>
> The program easily beats me :-) It is also able to win on ggs
> with ant, but I guess these are not very big achievements, are they? ;-)
> (the program doesn't speak with ggs yet; I just typed the moves back
> and forth between ant and my program)
What is your (program) nick?
try ODK (Othello Development Kit) by Chris Welty, it is very simple to
use
> So as you can see, there is a lot of work to be done...
> I think I'll have a decent endgame search engine in a few
> months - I'll at least publish the endgame solver then.
It would be very interesting :-)
Good luck,
Dmitriy Iassenev
Have you ever read about BPIP - Best Play for Imperfect Player? Quite
interesting approach...
Good luck,
Dmitriy Iassenev
I'm curious how the branching factor would relate to move ordering.
(a shallow search for pre-ordering does make sense, but it isn't clear
to me that the branching factor would relate to the likelihood of
finding the value that will generate the most prunes)
(I haven't been up on the literature in a while--have there been some
interesting demonstrations along these lines?)
Matt
Alpha-beta pruning can roughly double the searching depth (effectively reducing
the branching factor to about its square root) only in the case where all the
moves are sorted perfectly (i.e. you always know which move is the best). If
you find the best move early you can prune the variation earlier, therefore you
don't need to search useless positions. Move ordering is crucial if you use
alpha-beta.
Good luck,
Dmitriy Iassenev
Sure.
I was wondering why the original poster had suggested using the branching
factor for a child as a basis for determining move-ordering.
(I have seen shallow searches and killer tables as common heuristics to
improve the pruning efficiency)
Matt
Well, if you mean fastest-first - it is not a general basis, it is a
context-specific heuristic which orders moves well in Othello, where the
mobility feature plays one of the main roles. Its efficiency is confirmed by
practice, as all of the top endgame solvers use it, and some programs use it
even in the midgame. The branching factor of a child correlates highly with the
branching factor of its parent in Othello, therefore using fastest-first
is crucial.
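For illustration, a minimal sketch of fastest-first ordering (the names are
mine; how the opponent-mobility counts are obtained is left to the engine):

#include <algorithm>
#include <utility>
#include <vector>

// Each entry: (move, number of legal replies the opponent has after that move).
using ScoredMove = std::pair<int, int>;

void order_fastest_first(std::vector<ScoredMove>& moves) {
    // Fewer replies for the opponent = cheaper subtree = search it first.
    std::stable_sort(moves.begin(), moves.end(),
                     [](const ScoredMove& a, const ScoredMove& b) {
                         return a.second < b.second;
                     });
}

Using a stable sort keeps any prior ordering (for example a transposition-table
move tried first) among children with equal mobility.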
BTW killer tables don't play a role in Othello.
Good luck,
Dmitriy Iassenev
Interesting. (I would have expected there to be an even/odd relationship
instead, as one side's mobility can often coincide with the opponent's
immobility)
>
>BTW killer tables doesn't play a role in Othello.
>
I forget the exact terminology--isn't this what they were called in Keyano?
Matt
Let's say you've got 6 children to select from:
child: | branching factor: | win/loss for the player to move:
-----------------------------------------------------------------
1 | 9 | L
2 | 9 | L
3 | 2 | L
4 | irrelevant | W
5 | irrelevant | W
6 | irrelevant | W
Selecting child 1, 2 or 3 will result in an alpha-beta cutoff.
*But* you can probably calculate the value of child 3 (a loss) much faster
than the values of children 1 and 2 (because of the difference in branching
factor). Therefore it might be better to check child no. 3 first.
I read a paper about Keyano, but don't remember what they called killer
tables. I mean the tables that gather information about which moves caused
cut-offs; you then use this information to try first the move which has
caused more cut-offs than the others.
Good luck,
Dmitriy Iassenev
Indeed, that's a very interesting document. Thank you for the
reference to this paper. The ideas presented in the document are very
appealing.
However, in endgames the evaluation function is precise and minimaxing
*is* the correct way of calculating the root's value. So, while bpip
seems to be very useful in midgame play, it is inappropriate for
endgames, which are my main interest.
yes, right you are, but it may be used for selective endgame search. Do you
use it? All the best endgame solvers presort moves via the transposition table
by running, before solving, a selective endgame search - a variation of MPC.
Good luck,
Dmitriy Iassenev.
I haven't performed actual tests :-/
So my statements about efficiency may be wrong indeed.
The reasoning behind using a bit-board representation is that
one can use table lookups for evaluating
- the disc difference
- the "horizontal" part of moves
>
>> - simple alpha-beta
>> (yes! even no transposition tables and no move ordering)
>
> that's bad, try it cause it speeds up very well.
Well, I don't care that much about midgame quality,
since I want to improve endgame search. I'll be happy
if my program performs better (in terms of nodes
visited) than edax and winzebra on the ffo test suite.
> How big is nps (nodes per second) in your program? Vo reaches 1.65 mnps on
> pIII 1.0ghz
I don't have precise measurements, since I am currently using a multiuser
server (4 x Pentium II Xeon 400) - in alpha-beta my program does about
250 knps. But this measure is of course biased, since my alpha-beta
is *very* simple (no tt, no move ordering etc.)
>
>> - proof-number search with transposition tables and some other
>> extensions
>
> I read about that search, but actually don't remember the idea. Can you
> explain it?
This is a best-first kind of search. It means that the whole
tree is kept in memory and the algorithm iteratively selects the "best" leaf
and expands it. pn-search associates 2 numbers with each node,
which (roughly speaking) are approximations of the effort needed
to prove this node to be a win (proof number) and the effort needed
to prove this node to be a loss (disproof number). There is
a well-defined method to propagate these values throughout the tree.
When deciding "what to do next", pn-search tries to solve the whole
tree with the least effort. This usually results in preferring narrow
subtrees over broad subtrees. The original pn-search algorithm does
not use an evaluation function - it just expands the tree according
to its shape. The algorithm performs very well in solving chess
endgames. A variant of pn-search was used in solving go-moku.
See Victor Allis' PhD thesis for more details on pn-search.
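For readers unfamiliar with the bookkeeping, here is a compact sketch of the
standard pn/dn propagation rule (the names and layout are mine, not from my
program): at an OR node (the side trying to prove a win) the proof number is
the minimum of the children's proof numbers and the disproof number is their
sum; at an AND node the roles are swapped.

#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

constexpr uint64_t INF = std::numeric_limits<uint64_t>::max();  // proved/disproved

struct PnNode {
    bool or_node;                  // true: the player to move tries to prove the win
    uint64_t proof = 1;            // estimated effort to prove a win
    uint64_t disproof = 1;         // estimated effort to disprove it
    std::vector<PnNode*> children;
};

uint64_t saturated_sum(uint64_t a, uint64_t b) {
    return (a == INF || b == INF) ? INF : a + b;
}

void update(PnNode& n) {
    if (n.children.empty()) return;        // leaf values come from (dis)proof or init
    uint64_t best = INF, total = 0;
    for (const PnNode* c : n.children) {
        best  = std::min(best, n.or_node ? c->proof : c->disproof);
        total = saturated_sum(total, n.or_node ? c->disproof : c->proof);
    }
    if (n.or_node) { n.proof = best;  n.disproof = total; }
    else           { n.proof = total; n.disproof = best;  }
}

The "best" leaf is found by repeatedly descending to the child that achieves
the parent's proof number (at an OR node) or disproof number (at an AND node).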
>
> What is your (program) nick?
lanfor
> try ODK (Othello Developemtn Kit) by Chris Welty, it is very simple to use
> it
>
AFAIR it doesn't compile under linux.
you can keep it up to date incrementally by updating it during disk flipping
> - "horizontal" part of moves
what is it?
> This is a best-first kind of search. It means, that the whole
> tree is kept in memory and algorithm iteratively selects "best" leaf
> and expands it. pn-search associates with each node 2 numbers,
> which (roughly speaking) are approximations of effort needed
> to prove this node to be a win (proof number) and effort needed
> to prove this node to be a lose (disproof number). There is
> a well defined method to propagate these values throughout the tree.
> When deciding "what to do next", pn-search tries to solve the whole
> tree with least effort. This usually results in preferring narrow
> subtrees over broad subtrees. The original pn-search algorithm does
> not use evaluation function - it just expands the tree according
> to its shape. The algorithm performs very well in solving chess
> endgames. Variant of pn-search was used in solving go-moku.
1. Then you must read about BPIP - it uses a similar tree-expansion strategy.
2. Is go-moku solved?? I read the paper "Searching for Solutions in Games and
Artificial Intelligence" and there the author said go-moku was not solved yet.
3. Does any professional chess program use pn-search?
4. Have you read the paper about conspiracy numbers and cn-search?
>
> See Victor Allis PhD Thesis for more details on pn-search.
yes, yes I found it at home
>
> >
> > What is your (program) nick?
>
> lanfor
yes, there is "lanfor" on GGS but without any stats, maybe you played
unrated games? To play reated games you have to register, just send a letter
to the Igor ot Mic.
>
> > try ODK (Othello Developemtn Kit) by Chris Welty, it is very simple to
use
> > it
> >
> AFAIR it doesn't compile under linux.
well, maybe, it uses winsock.dll...
Good luck,
Dmitriy Iassenev.
>> - "horizontal" part of moves
>
> what is it?
You can flip discs in 8 directions - flipping to the left and to
the right can be done with a table look-up (there are only 3^8 row
configurations to consider).
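A sketch of that table-lookup idea, with my own names and with the mover fixed
as colour 1 (a real program would also handle the other colour, e.g. with a
second table or by swapping colours before the lookup):

#include <array>

constexpr int EMPTY = 0, MOVER = 1, OPP = 2;

// flip_table[config][sq] = row configuration after MOVER places a disc on sq
// and makes its horizontal flips (unchanged if sq is already occupied).
std::array<std::array<int, 8>, 6561> flip_table;

int flip_row(int config, int sq) {
    std::array<int, 8> row;
    for (int i = 0, c = config; i < 8; ++i, c /= 3) row[i] = c % 3;  // decode base 3
    if (row[sq] != EMPTY) return config;             // occupied: nothing to do
    row[sq] = MOVER;
    for (int dir : { -1, +1 }) {                     // flip to the left, then to the right
        int i = sq + dir;
        while (i >= 0 && i < 8 && row[i] == OPP) i += dir;
        if (i >= 0 && i < 8 && row[i] == MOVER)
            for (int j = sq + dir; j != i; j += dir) row[j] = MOVER;
    }
    int result = 0;
    for (int i = 7; i >= 0; --i) result = result * 3 + row[i];       // re-encode
    return result;
}

void build_flip_table() {
    for (int config = 0; config < 6561; ++config)
        for (int sq = 0; sq < 8; ++sq)
            flip_table[config][sq] = flip_row(config, sq);
}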
Aarrrghhh, well maybe a bit-board isn't such a speed-up :-)
I'll try the normal table approach eventually...
Though OTOH I have to take care to limit memory usage...
(proof-number search consumes quite a lot of memory;
my extensions try to overcome this, but my algorithm
still uses considerably more memory than alpha-beta
- max_nodes_in_memory roughly equals 10% of the total number
of visited nodes)
>
>> This is a best-first kind of search. It means, that the whole
>> tree is kept in memory and algorithm iteratively selects "best" leaf
>> and expands it. pn-search associates with each node 2 numbers,
>> which (roughly speaking) are approximations of effort needed
>> to prove this node to be a win (proof number) and effort needed
>> to prove this node to be a lose (disproof number). There is
>> a well defined method to propagate these values throughout the tree.
>> When deciding "what to do next", pn-search tries to solve the whole
>> tree with least effort. This usually results in preferring narrow
>> subtrees over broad subtrees. The original pn-search algorithm does
>> not use evaluation function - it just expands the tree according
>> to its shape. The algorithm performs very well in solving chess
>> endgames. Variant of pn-search was used in solving go-moku.
>
> 1. Then you must read about BPIP - they use similar tree-expansion strategy
They use probabilities of error in the evaluation function,
but there is no error in the evaluation function one uses
during endgame search.
Endgame search is all about proving the tree's root to be a win or
loss in the *least* possible number of visited nodes. Look at FFO40
- WinZebra wld-solves this one in 500knodes, edax in 70knodes
and my program can do it in 50knodes. Of course that only means
that my program is better in that particular board position, but
it also shows that there is room for improvement.
I think BPIP will not help in finding a minimal (or at least close to
minimal) proof/disproof tree. BPIP's motivation when selecting a
leaf (or leaves) to expand is to maximally reduce uncertainty about which
move to make. PN-search's motivation when selecting a leaf is to
prove/disprove the tree with the least effort - that's what one wants in
endgame search. Of course BPIP will eventually end up with a
proof/disproof tree, but this will not necessarily be a minimal tree.
> 2. Is go-moku solved?? I read a paper "Searching for solutions in games and
> Artificial intelligence" and where author told go-moku not solved yet
AFAIR go-moku 15x15 is proved to be a win for the first player.
This is described in "Go-moku and threat space search" by L.V. Allis,
H.J. van den Herik and M.P.H. Huntjens
> 3. Does any professional chess program use pn-search?
I don't know... :-(
> 4. Have you read a paper about conspiration numbers and cn-search?
I only know that pn-search is based on these ideas.
Hmpf... that depends on what the goal is:
1. to make a good move in a given position
2. to tell whether the position is a win for the player to move
Selective search might be quite good for goal 1,
but it is useless for goal 2 - there will always
be doubt whether the answer provided by a selective search
is right or wrong.
All endgame solvers use selective endgame search during FFO tests, because
move ordering is very useful and you can use the value returned by the
selective search as aspiration numbers for your solver.
Good luck,
Dmitriy Iassenev
yes, maybe you are right
> > 2. Is go-moku solved?? I read a paper "Searching for solutions in games
and
> > Artificial intelligence" and where author told go-moku not solved yet
>
> AFAIR go-moku 15x15 is proved to be a win for the first player.
> This is described in "Go-moku and threat space search" by L.V. Allis,
> H.J. van den Herik and M.P.H. Huntjens
but go-moku has restrictions on the first moves... the first player can't play
in the center cells, hmmm
Good luck,
Dmitriy Iassenev.
Hmm, vo's solver solves it in 10 secs on my machine.
Good luck,
Dmitriy Iassenev
I think that the game you are referring to is called Renju.
How many nodes visited? :-)
yes
oh, yes right you are
I mean 10 secs for solving, not wld. Here are the stats:
if I run the tournament program, which plays on time, then I get the
following results:
147 470 nodes for the midgame routines and the selective endgame search
2 435 nodes for wld
23 324 703 nodes for solving
if I turn off the midgame routines and the selective endgame search I get the
following:
38 508 nodes for wld
33 584 861 nodes for solving
But, actually, it means nothing, because I can make the node count even lower
if I choose slower options:
FF  TT   nodes for wld
 2   6       27 785
 3   6       21 888
 4   6       23 093
 5   6       26 104
 6   6       29 555
 7   6       38 508
 2   3       25 362
 3   3       20 150
 4   3       21 257
 5   3       24 106
 6   3       27 476
 7   3       37 346
where FF is the minimum depth for the fastest-first heuristic and TT is the
minimum depth at which I store nodes in the transposition table. The values
FF = 7 and TT = 6 are the defaults for VO, because it solves faster with them.
So maybe you are looking for a solution in the wrong direction?
Good luck,
Dmitriy Iassenev.
Can you explain what you mean by aspiration numbers in WLD search?
As I did not find a satisfying answer to your question in this long
thread, here is a possible algorithm to approximate the stability of a
position (a rough code sketch of steps 3-7 follows the list):
1) compute stability for a single line, by simulating moves and
looking for unaffected discs. For example, within a line like
-OXOOOO-, the 'X' disc is stable while the 'O' discs are not. Let's
call such a stable disc line-stable.
2) repeat (1) for every possible line, i.e. the 8 horizontals, the 8
verticals and all diagonals containing three squares or more.
3) Now consider each disc on the board and check whether it was marked as
line-stable in every line along which it can be flipped. Let's call such a
disc semi-stable.
4) Label all semi-stable discs on the edge as stable.
5) Label all semi-stable discs for which all flipping lines are full
as stable.
6) If a semi-stable disc touches at least one stable disc of the same
colour in every line along which it can be flipped, label it as stable.
7) repeat (6) until no new stable disc is found.
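Here is a rough sketch of steps 3) to 7), assuming the line-stability marks
from steps 1) and 2) have already been computed; the names and data layout are
mine and only illustrate how the marks could be combined:

#include <array>
#include <vector>

constexpr int N = 64;                      // squares a1..h8, row-major
enum Cell : char { EMPTY, BLACK, WHITE };

struct SquareInfo {
    Cell disc = EMPTY;
    std::vector<std::vector<int>> lines;   // lines (as square lists) along which
                                           // this square can still be flipped
    bool semi_stable = false;              // line-stable in all those lines (step 3)
    bool edge = false;                     // lies on the board edge
    bool stable = false;
};

void propagate_stability(std::array<SquareInfo, N>& board) {
    // Steps 4) and 5): edge discs and discs whose flipping lines are all full.
    for (SquareInfo& s : board) {
        if (s.disc == EMPTY || !s.semi_stable) continue;
        bool all_full = true;
        for (const auto& line : s.lines)
            for (int sq : line)
                if (board[sq].disc == EMPTY) all_full = false;
        if (s.edge || all_full) s.stable = true;
    }
    // Steps 6) and 7): a semi-stable disc touching a stable disc of its own
    // colour in every flipping line becomes stable; repeat to a fixed point.
    bool changed = true;
    while (changed) {
        changed = false;
        for (int i = 0; i < N; ++i) {
            SquareInfo& s = board[i];
            if (s.disc == EMPTY || s.stable || !s.semi_stable) continue;
            bool supported = true;
            for (const auto& line : s.lines) {
                int pos = 0;
                while (line[pos] != i) ++pos;          // position of i within the line
                bool found = false;
                for (int d : { -1, +1 }) {             // the two neighbours along the line
                    int p = pos + d;
                    if (p >= 0 && p < static_cast<int>(line.size())) {
                        const SquareInfo& nb = board[line[p]];
                        if (nb.stable && nb.disc == s.disc) found = true;
                    }
                }
                if (!found) { supported = false; break; }
            }
            if (supported) { s.stable = true; changed = true; }
        }
    }
}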
--
Richard