
Quiescence search problems (solved)


MANOHARARAJAH VALAVAN

May 1, 1996

Remember the post I made earlier about having a ton of nodes in the
quiescence search procedure?

I have found the problem: it was a silly bug in the move sorting algorithm.
I am using a simple bubble sort procedure to arrange the moves, and I swapped
the moves but forgot to swap the move scores. So the sort just sat there
swapping the first two elements over and over again.

Now the program is searching at normal "node levels"....
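For anyone who hits the same thing: the failure pattern is swapping one parallel array but not the other. A minimal sketch of the fix in C (hypothetical `moves`/`scores` arrays, not the actual code from the program):

```c
/* Sort moves by descending score.  The bug described above: only the
 * `moves` entries were swapped, so `scores` never changed order, the
 * same comparisons kept firing, and the move order never settled.
 * The fix is to swap BOTH parallel arrays together. */
void sort_moves(int moves[], int scores[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (scores[j] < scores[j + 1]) {
                int tm = moves[j];
                moves[j] = moves[j + 1];
                moves[j + 1] = tm;
                int ts = scores[j];             /* the forgotten swap */
                scores[j] = scores[j + 1];
                scores[j + 1] = ts;
            }
}
```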


--
-------------------------------------------------------------------------------
man...@ecf.utoronto.ca | 3rd year Comp Eng., University of Toronto
Valavan Manohararajah |
-------------------------------------------------------------------------------

Tom Kerrigan

May 2, 1996

Bummer of a bug.

Question (more of a hint really): do you need all the moves sorted before you
search one? Do you ever need all the moves sorted, considering how many you end up
searching due to alpha-beta?

Cheers,
Tom

_______________________________________________________________________________
Tom Kerrigan kerr...@frii.com O-

All Finagle Laws may be bypassed by learning the simple art of doing
without thinking.

MANOHARARAJAH VALAVAN

May 2, 1996

In article <4m92br$e...@europa.frii.com>, Tom Kerrigan <kerr...@frii.com> wrote:

I probably don't need to sort all the moves... but I just started on the
program recently, so I'm trying to get it working (without bugs) first;
then I'll concentrate on tweaking the hell out of it.

You are quite right: with alpha-beta, I don't need to sort all the moves.
I will look into this soon.

Simon Read

May 5, 1996

MV: man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN)
MV> I am using a simple bubble sort procedure to arrange the moves [ ... ]
-->
What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
sorting a small number of items, say fewer than 10, you can get away with an
O(n^2) sort, but there are better ones than bubble sort. Try straight
insertion sort, which needs many times fewer exchanges of elements than
bubble sort. The algorithm is in fact a little simpler than bubble sort.

Simple guess at the algorithm:

N items to sort; attempt to put largest (highest score) first

for J=1 to N-1
    L=J                                ; largest so far...
    for I=J+1 to N
        if item(I)>item(L) then L=I
    next I
    if L>J then exchange items L and J ; The alternative is that L=J, so
next J                                 ; no exchange is necessary

I think that should do it. The reason it's better than the bubble sort
is that the bubble sort has the exchanges _inside_ the inner loop. This
sort has moved the exchange to outside the inner loop. The bubble sort
has about N times more exchanges.
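A C rendering of the same algorithm may be easier to drop into a chess program; this is a direct translation of the pseudocode above, sorting a score array largest-first:

```c
/* Selection-style sort, largest first: scan the unsorted tail for
 * its maximum, then do at most ONE exchange per outer pass.  Bubble
 * sort, by contrast, may exchange on every inner-loop step. */
void select_sort(int item[], int n)
{
    for (int j = 0; j < n - 1; j++) {
        int l = j;                       /* largest so far */
        for (int i = j + 1; i < n; i++)
            if (item[i] > item[l])
                l = i;
        if (l != j) {                    /* exchange outside the inner loop */
            int t = item[j];
            item[j] = item[l];
            item[l] = t;
        }
    }
}
```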

Simon


MANOHARARAJAH VALAVAN

May 6, 1996

In article <4mi4vp$5...@yama.mcc.ac.uk>,

Simon Read <s.r...@cranfield.ac.uk> wrote:
>MV: man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN)
>MV> I am using a simple bubble sort procedure to arrange the moves [ ... ]
>-->
>What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
>sorting a small number of items, like less than 10, you can use an O(n^2)
>sort but there are better ones than buble sort. Try straight
>insertion sort which needs many times fewer exchanges of elements than
>bubble sort. The algorithm is in fact a little simpler than bubble sort.

Actually, I was aware that bubble sort is O(n*n), but I was kind of lazy,
and besides I just started to write the program... I wasn't too worried about
efficiency (I just coded the first algorithm that came to mind). Efficiency
will come later, after I have a really good evaluation function and a bug-free
search procedure.

>
>Simple guess at the algorithm:
>
>N items to sort; attempt to put largest (highest score) first
>
>for J=1 to N-1
> L=J ; largest so far...
> for I=J+1 to N
> if item(I)>item(L) then L=I
> next I
> if L>J then exchange items L and J ; The alternative is that L=J, so
>next J ; no exchange is necessary
>
>I think that should do it. The reason it's better than the bubble sort
>is that the bubble sort has the exchanges _inside_ the inner loop. This
>sort has moved the exchange to outside the inner loop. The bubble sort
>has about N times more exchanges.
>
>Simon
>

Bruce Moreland

May 6, 1996

In article <Dqzp2...@ecf.toronto.edu>, man...@ecf.toronto.edu says...

>
>In article <4mi4vp$5...@yama.mcc.ac.uk>,
>Simon Read <s.r...@cranfield.ac.uk> wrote:
>>MV: man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN)
>>MV> I am using a simple bubble sort procedure to arrange the moves [ ... ]
>>-->
>>What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
>>sorting a small number of items, like less than 10, you can use an O(n^2)
>>sort but there are better ones than buble sort. Try straight
>>insertion sort which needs many times fewer exchanges of elements than
>>bubble sort. The algorithm is in fact a little simpler than bubble sort.
>
>Actually, I was aware that bubble sort is O(n*n), but I was kind of lazy,
>and besides I just started to write the program....I wasn't too worried about
>efficiency (I just coded the first algorithm that came to mind). Efficiency
>will come later after I have real good evaluation function and a bugless
>search procedure.

I started writing a chess program because I was disgusted by the way that
gnu did this "sort". It went through the candidate move list, and picked
the one that had the best key. The next time it needed a candidate move,
it did the same thing.

I figured I'd use a super-spiffy quick-sort I had. I got it out of a data
structures book, and figured that I'd done a good job implementing it. It
quick-sorted down to a fixed partition size, then did an insertion sort
from there. I figured that this would be super-smooth.

It took me a while to figure out that I was an idiot, at which point I
changed mine to work the same way gnuchess did.

The deal here is that if you have 35 candidate moves, you won't actually
look at all of them in most cases. You'll just look at one of them in a
lot of cases, in other cases you'll just look at a few of them, and in the
cases where you end up looking at all of them, after you've done your best
with the first few you can safely look at the rest in any order.

I don't care how good your quick-sort is, if N = 35, NlogN isn't going to
beat 3N.

bruce


Peter Osterlund

May 6, 1996

On 5 May 1996, Simon Read wrote:

> What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
> sorting a small number of items, like less than 10, you can use an O(n^2)
> sort but there are better ones than buble sort. Try straight
> insertion sort which needs many times fewer exchanges of elements than
> bubble sort. The algorithm is in fact a little simpler than bubble sort.
>

> Simple guess at the algorithm:
>
> N items to sort; attempt to put largest (highest score) first
>
> for J=1 to N-1
>     L=J                                ; largest so far...
>     for I=J+1 to N
>         if item(I)>item(L) then L=I
>     next I
>     if L>J then exchange items L and J ; The alternative is that L=J, so
> next J                                 ; no exchange is necessary

This algorithm is actually called selection sort. An additional advantage
with this algorithm is that you don't have to sort the entire move list in
one sweep. Instead, just select the best move, try it, and, if it
generates a beta cutoff, you will never need to sort the remaining moves.
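The incremental form described here is essentially one outer pass of that selection sort per move actually searched. A hedged sketch (`pick_move` and the parallel `moves`/`scores` arrays are illustrative names, not from any particular program):

```c
/* Swap the highest-scored move from moves[next..n-1] into position
 * `next`.  Called once per move actually searched; after a beta
 * cutoff the remaining moves are simply never picked, so no work
 * is wasted sorting them. */
void pick_move(int moves[], int scores[], int n, int next)
{
    int best = next;
    for (int i = next + 1; i < n; i++)
        if (scores[i] > scores[best])
            best = i;
    if (best != next) {
        int t;
        t = moves[next];  moves[next]  = moves[best];  moves[best]  = t;
        t = scores[next]; scores[next] = scores[best]; scores[best] = t;
    }
}
```

In the alpha-beta loop you would call this with `next = 0, 1, 2, ...` at the top of each iteration and stop on a beta cutoff; a first-move cutoff then costs a single O(N) scan instead of a full sort.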

+-------------------------------------------------------------------------+
| Peter Österlund Email: peter.o...@mailbox.swipnet.se |
| Sköndalsvägen 35 f90...@nada.kth.se |
| S-128 66 Sköndal Phone: +46 8 942647 |
| Sweden |
+-------------------------------------------------------------------------+

Chris Whittington

May 7, 1996

man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN) wrote:
>
> In article <4mi4vp$5...@yama.mcc.ac.uk>,
> Simon Read <s.r...@cranfield.ac.uk> wrote:
> >MV: man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN)
> >MV> I am using a simple bubble sort procedure to arrange the moves [ ... ]
> >-->
> >What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
> >sorting a small number of items, like less than 10, you can use an O(n^2)
> >sort but there are better ones than buble sort. Try straight
> >insertion sort which needs many times fewer exchanges of elements than
> >bubble sort. The algorithm is in fact a little simpler than bubble sort.
>
> Actually, I was aware that bubble sort is O(n*n), but I was kind of lazy,
> and besides I just started to write the program....I wasn't too worried about
> efficiency (I just coded the first algorithm that came to mind). Efficiency
> will come later after I have real good evaluation function and a bugless
> search procedure.
>

The 51st law of chess programming says: "leaving the cleanup and
efficiency work till later means it will never get done."

There's always one more bit needed to get the "really good eval"...

best regards,


Chris Whittington

Robert Hyatt

May 7, 1996

In article <Pine.AMI.3.93.960505225905.126738064A-100000@localhost>,
Peter Osterlund <peter.o...@mailbox.swipnet.se> wrote:
-->On 5 May 1996, Simon Read wrote:
-->

-->> What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
-->> sorting a small number of items, like less than 10, you can use an O(n^2)
-->> sort but there are better ones than buble sort. Try straight
-->> insertion sort which needs many times fewer exchanges of elements than
-->> bubble sort. The algorithm is in fact a little simpler than bubble sort.
-->>
-->> Simple guess at the algorithm:
-->>
-->> N items to sort; attempt to put largest (highest score) first
-->>
-->> for J=1 to N-1
-->> L=J ; largest so far...
-->> for I=J+1 to N
-->> if item(I)>item(L) then L=I
-->> next I
-->> if L>J then exchange items L and J ; The alternative is that L=J, so
-->> next J ; no exchange is necessary
-->
-->This algorithm is actually called selection sort. An additional advantage
-->with this algorithm is that you don't have to sort the entire move list in
-->one sweep. Instead, just select the best move, try it, and, if it
-->generates a beta cutoff, you will never need to sort the remaining moves.
-->

The drawback to this algorithm is that it executes the same number of
iterations for a given list size, no matter what the initial order is.
Suppose you do as I do in Crafty and generate captures using the least
valuable piece first, then the next, and so on. It turns out that quite
often the list is nearly correct. There the old bubble sort is O(n), as
one pass will terminate it. Only when the best value is near the bottom
does it reach the 2x-slower value that would make another algorithm
better. Of course, the fact that the list is typically only 2-3 moves
helps too...
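The early-exit behavior described here depends on stopping after a clean pass; a plain textbook bubble sort without the "no swaps" check never gets the O(n) best case. A small sketch (scores only, illustrative rather than Crafty's actual code):

```c
/* Bubble sort, descending, with early exit: a full pass that makes
 * no exchange proves the list is sorted, so an already-ordered
 * capture list (least valuable attacker generated first, say) costs
 * a single O(n) verification pass. */
void bubble_sort(int scores[], int n)
{
    int swapped = 1;
    while (swapped) {
        swapped = 0;
        for (int j = 0; j < n - 1; j++)
            if (scores[j] < scores[j + 1]) {
                int t = scores[j];
                scores[j] = scores[j + 1];
                scores[j + 1] = t;
                swapped = 1;
            }
        n--;   /* the tail element is now in its final place */
    }
}
```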


--
Robert Hyatt Computer and Information Sciences
hy...@cis.uab.edu University of Alabama at Birmingham
(205) 934-2213 115A Campbell Hall, UAB Station
(205) 934-5473 FAX Birmingham, AL 35294-1170

David Spencer

May 7, 1996

In article <4mi4vp$5...@yama.mcc.ac.uk>,
Simon Read <s.r...@cranfield.ac.uk> wrote:
>MV: man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN)
>MV> I am using a simple bubble sort procedure to arrange the moves [ ... ]
>-->

I think it's better *not* to sort - just to keep on finding the best move.
The reason is that with good move ordering, the "best" move will usually
cause a cutoff, thus you save the cost of a sort.

>What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
>sorting a small number of items, like less than 10, you can use an O(n^2)
>sort but there are better ones than buble sort. Try straight
>insertion sort which needs many times fewer exchanges of elements than
>bubble sort. The algorithm is in fact a little simpler than bubble sort.
>
>Simple guess at the algorithm:
>
>N items to sort; attempt to put largest (highest score) first
>
>for J=1 to N-1
>    L=J                                ; largest so far...
>    for I=J+1 to N
>        if item(I)>item(L) then L=I
>    next I
>    if L>J then exchange items L and J ; The alternative is that L=J, so
>next J                                 ; no exchange is necessary
>

Marcel van Kervinck

May 8, 1996

Bruce Moreland (brucemo) wrote:

> The deal here is that if you have 35 candidate moves, you won't actually
> look at all of them in most cases. You'll just look at one of them in a
> lot of cases, in other cases you'll just look at a few of them, and in the
> cases where you end up looking at all of them, after you've done your best
> with the first few you can safely look at the rest in any order.

> I don't care how good your quick-sort is, if N = 35, NlogN isn't going to
> beat 3N.

It may sound ridiculous, but I use an augmented *heapsort* in my
program. Why? The first step of heapsort is an O(N) 'buildheap',
which also gives the first move. Every time a new move is needed,
it takes O(ln n) to 'rebuild' the broken heap, and select the
next move. So we have:

O(N) if 1st move yields a cutoff (just like gnu)
O(N ln N) if all moves are needed (just like quicksort)

I profiled this approach in my program and it is actually
faster than the gnu way.

The only alternative is to predict the node type before sorting:
if the node is likely to fail low, do a quicksort just once;
if the node is likely to fail high, use augmented selection sort.
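For reference, the heap machinery described here can be sketched in a few lines of C (scores only; the real thing would carry moves alongside the scores, and these function names are made up for the example):

```c
/* Max-heap move picker.  build_heap is O(N) and leaves the best
 * score at the root, so the first move (and a first-move cutoff)
 * costs O(N); each later pick is one sift_down, O(log N), so taking
 * all N moves costs O(N log N). */
static void sift_down(int h[], int n, int i)
{
    for (;;) {
        int big = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && h[l] > h[big]) big = l;
        if (r < n && h[r] > h[big]) big = r;
        if (big == i) return;
        int t = h[i]; h[i] = h[big]; h[big] = t;
        i = big;
    }
}

void build_heap(int h[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        sift_down(h, n, i);
}

int pick_best(int h[], int *n)   /* remove and return the maximum */
{
    int best = h[0];
    h[0] = h[--*n];
    sift_down(h, *n, 0);
    return best;
}
```

Stopping after the first `pick_best` on a cutoff is what makes this competitive with the one-pass selection approach, while degrading gracefully to O(N log N) when every move must be searched.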

Marcel
-- _ _
_| |_|_|
|_ |_ Marcel van Kervinck
|_| bue...@urc.tue.nl

Vincent Diepeveen

May 9, 1996

In <4mllt3$i...@news.microsoft.com> brucemo (Bruce Moreland) writes:

>It took me a while to figure out that I was an idiot, at which point I
>changed mine to work the same way gnuchess did.

I wrote a 'heapsort' to sort the moves.

After a while I discovered that this meant you ALWAYS had
to sort the moves.

Currently I use about the same function GNUchess is using (I rewrote it
for my own data structure, but the principle is the same: selection sort?).
It needs only O(n) for every move you want to play. If you happen
to need the whole list, you need O(n*n).

This function, 'PickMove', takes about 0.5% of the whole system time.
I use it for every move my program ever makes, also in the q-search.

>The deal here is that if you have 35 candidate moves, you won't actually
>look at all of them in most cases. You'll just look at one of them in a
>lot of cases, in other cases you'll just look at a few of them, and in the
>cases where you end up looking at all of them, after you've done your best
>with the first few you can safely look at the rest in any order.

>In article <Dqzp2...@ecf.toronto.edu>, man...@ecf.toronto.edu says...
>>


>>In article <4mi4vp$5...@yama.mcc.ac.uk>,
>>Simon Read <s.r...@cranfield.ac.uk> wrote:
>>>MV: man...@ecf.toronto.edu (MANOHARARAJAH VALAVAN)
>>>MV> I am using a simple bubble sort procedure to arrange the moves [ ... ]
>>>-->
>>>What?? Bubble sort isn't all that efficient! It's O(n^2). If you are only
>>>sorting a small number of items, like less than 10, you can use an O(n^2)
>>>sort but there are better ones than buble sort. Try straight
>>>insertion sort which needs many times fewer exchanges of elements than
>>>bubble sort. The algorithm is in fact a little simpler than bubble sort.
>>
>>Actually, I was aware that bubble sort is O(n*n), but I was kind of lazy,
>>and besides I just started to write the program....I wasn't too worried about
>>efficiency (I just coded the first algorithm that came to mind). Efficiency
>>will come later after I have real good evaluation function and a bugless
>>search procedure.
>

>I started writing a chess program because I was disgusted by the way that
>gnu did this "sort". It went through the candidate move list, and picked
>the one that had the best key. The next time it needed a candidate move,
>it did the same thing.
>
>I figured I'd use a super-spiffy quick-sort I had. I got it out of a data
>structures book, and figured that I'd done a good job implementing it. It
>quick-sorted down to a fixed partition size, then did an insertion sort
>from there. I figured that this would be super-smooth.
>


I agree 100% with this.

>I don't care how good your quick-sort is, if N = 35, NlogN isn't going to
>beat 3N.

In fact, in the q-search you usually don't select a move at all
(forward pruning), or you usually select only one. The latter case gives
you N.

So I guess it is even less than 3N; I think it is about 2N if you also
use this function in the q-search.
Usually all the moves are about randomly ordered, so I DID find
that in-situ heapsort worked better than or equal to quicksort!

>bruce

Vincent Diepeveen
vdie...@cs.ruu.nl
--
+--------------------------------------+
|| email : vdie...@cs.ruu.nl ||
|| Vincent Diepeveen ||
+======================================+

Vincent Diepeveen

May 9, 1996

In <4mpvt4$3...@tuegate.tue.nl> bue...@asterix.urc.tue.nl (Marcel van Kervinck) writes:

>Bruce Moreland (brucemo) wrote:
>
>> The deal here is that if you have 35 candidate moves, you won't actually
>> look at all of them in most cases. You'll just look at one of them in a
>> lot of cases, in other cases you'll just look at a few of them, and in the
>> cases where you end up looking at all of them, after you've done your best
>> with the first few you can safely look at the rest in any order.
>
>> I don't care how good your quick-sort is, if N = 35, NlogN isn't going to
>> beat 3N.
>
>It may sound ridiculous, but I use an augmented *heapsort* in my
>program. Why? The first step of heapsort is an O(N) 'buildheap',
>which also gives the first move. Every time a new move is needed,
>it takes O(ln n) to 'rebuild' the broken heap, and select the
>next move. So we have:
>
> O(N) if 1st move yields a cutoff (just like gnu)

How much 'mov'-ing does this take, and how much system time?
How big is your data structure: how many ints do you use to represent
a move? Every move in my program needs several ints on average.

> O(N ln N) if all moves are needed (just like quicksort)

>I profiled this approach in my program and it is actually
>faster than the gnu way.

Do you also use this in Q-search?

>The only alternative is to predict the node-type before sorting.
>If the node is likely to fail low, do quicksort just once,
>If the node is likely to fail high, use augmented selection sort.

> Marcel
>-- _ _
> _| |_|_|
> |_ |_ Marcel van Kervinck
> |_| bue...@urc.tue.nl

Vincent Diepeveen

Robert Hyatt

May 9, 1996

In article <4mso8r$6...@krant.cs.ruu.nl>,
Vincent Diepeveen <vdie...@cs.ruu.nl> wrote:
-->In <4mpvt4$3...@tuegate.tue.nl> bue...@asterix.urc.tue.nl (Marcel van Kervinck) writes:
-->
-->>Bruce Moreland (brucemo) wrote:
-->>
-->>> The deal here is that if you have 35 candidate moves, you won't actually
-->>> look at all of them in most cases. You'll just look at one of them in a
-->>> lot of cases, in other cases you'll just look at a few of them, and in the
-->>> cases where you end up looking at all of them, after you've done your best
-->>> with the first few you can safely look at the rest in any order.
-->>
-->>> I don't care how good your quick-sort is, if N = 35, NlogN isn't going to
-->>> beat 3N.
-->
-->>It may sound ridiculous, but I use an augmented *heapsort* in my
-->>program. Why? The first step of heapsort is an O(N) 'buildheap',
-->>which also gives the first move. Every time a new move is needed,
-->>it takes O(ln n) to 'rebuild' the broken heap, and select the
-->>next move. So we have:
-->
-->> O(N) if 1st move yields a cutoff (just like gnu)
-->
-->How much 'mov' does this take, and how much systemtime?
-->How big is your datastructure: how many int's do you use to represent
-->a move? Every move in my program needs several ints average.

In Crafty, a move is 1 int long (21 bits, to be precise).
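Those 21 bits fit the usual fields exactly: 6 each for the from and to squares, and 3 each for the moving piece, captured piece, and promotion piece. A sketch of such a packing (an illustrative layout, not necessarily Crafty's exact bit assignment):

```c
/* Pack a move into one int: 6 + 6 + 3 + 3 + 3 = 21 bits.
 * Squares are 0..63; piece codes fit in 3 bits (0 = none). */
#define MOVE(from, to, piece, capt, promo) \
    ((from) | ((to) << 6) | ((piece) << 12) | ((capt) << 15) | ((promo) << 18))

#define FROM(m)   ((m) & 63)
#define TO(m)     (((m) >> 6) & 63)
#define PIECE(m)  (((m) >> 12) & 7)
#define CAPT(m)   (((m) >> 15) & 7)
#define PROMO(m)  (((m) >> 18) & 7)
```

Keeping a move in one machine word makes the sorting question cheap in the first place: exchanging two moves during a sort is a single int swap.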

-->
-->> O(N ln N) if all moves are needed (just like quicksort)
-->
-->>I profiled this approach in my program and it is actually
-->>faster than the gnu way.
-->
-->Do you also use this in Q-search?
-->

The main issue that favors bubble sort is that it is quicker than the
rest if the list is already sorted. If you use bitmaps, you will
likely get pretty good ordering in most cases, because you can
generate captures in a more-or-less logical order by looking at
less valuable capturing pieces first. I ran several tests in Crafty
and never found a sort that was any better (faster), including the
famous N*N/2 algorithm above. If the move list were somehow inversely
ordered to start, this would be quite a bit different. However,
for reasonable ordering, bubble sort exits after a pass where no
values were moved, which for a non-randomly-ordered list is usually
something less than N*N/2 tests...

If you use offsets and don't choose which piece captures first, your
mileage may vary. Anyone interested can certainly play with Crafty's
sorting. It's done in two places, next.c (NextMove()) and nextc.c
(NextCapture()). The code is exactly the same in both places and
is easy to understand and modify. Convince yourself that bubble sort
is not as bad as reputed if you know something about the order of the
list before you start...

Marcel van Kervinck

May 13, 1996

Vincent Diepeveen (vdie...@cs.ruu.nl) wrote:

> >It may sound ridiculous, but I use an augmented *heapsort* in my
> >program. Why? The first step of heapsort is an O(N) 'buildheap',
> >which also gives the first move. Every time a new move is needed,
> >it takes O(ln n) to 'rebuild' the broken heap, and select the
> >next move. So we have:
>
> > O(N) if 1st move yields a cutoff (just like gnu)

> How much 'mov' does this take, and how much systemtime?
> How big is your datastructure: how many int's do you use to represent
> a move? Every move in my program needs several ints average.

Two 32-bit words. I try to avoid generating non-captures.
The timing I don't remember exactly, but it is near 7%
for build_heap + restore_heap. The current state is somewhat
less than 1000 instructions/node total, which indicates about
70 instructions spent on sorting. The code for this is heavily
hand-optimized assembly. I would have to reprofile it to get
exact figures.

> > O(N ln N) if all moves are needed (just like quicksort)

> >I profiled this approach in my program and it is actually
> >faster than the gnu way.

> Do you also use this in Q-search?

Yes.
