
Games that stretch MCTS


Cameron Browne

unread,
Feb 23, 2011, 2:37:56 PM2/23/11
to
Hi,

I'm doing a bit of work on Monte Carlo Tree Search (MCTS) which is a
new AI approach for move planning in games. The algorithm behaves
differently for games with different characteristics - some games are
well suited to it while some resist it - and I'm interested in finding
the outliers.

Can anyone suggest games of reasonable complexity that exhibit the
following properties:

1. Unusually slow to calculate legal moves for a given position.
2. Unusually slow to apply a move to a given board state.
3. Fast to find Nth legal move (e.g. Gomoku).
4. Slow to find Nth legal move (e.g. Triskelion?).
5. Fast to perform short lookahead (e.g. 2 ply) on a given position
(e.g. Gomoku again).
6. Slow to perform short lookahead on a given position.
7. Unusually low branching factor.
8. Unusually high branching factor.
9. Long move sequences for either player (e.g. Chinese Checkers).
10. Delayed reward, i.e. positional strength not clear until the end
of the game (e.g. Omega, maybe Othello?).
11. Immediate reward, i.e. immediate positional evaluation is a good
indication of who will win.
12. Random playouts give good estimate of who will win.
13. Random playouts give poor estimate of who will win.
14. Games in which a win or loss is never far away, i.e. high ratio of
leaf nodes in the search tree (Palago?).
15. Games in which a win or loss is very remote at the beginning but
gets closer as the game progresses (Hex?).

Basically any game that will affect the performance of MCTS/UCT by its
very nature.
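For anyone wanting to test a game implementation against criteria 1, 2, 7 and 8, a rough timing harness might look like the sketch below. The game interface (legal_moves(), play() returning a new state) is hypothetical; adapt the names to your own engine.

```python
import random
import time

def profile(game_factory, games=100):
    """Play random games, timing move generation and move application."""
    gen_time = apply_time = 0.0
    positions = total_branching = 0
    for _ in range(games):
        g = game_factory()
        while True:
            t0 = time.perf_counter()
            moves = g.legal_moves()
            gen_time += time.perf_counter() - t0
            if not moves:                   # terminal position
                break
            positions += 1
            total_branching += len(moves)
            move = random.choice(moves)
            t0 = time.perf_counter()
            g = g.play(move)                # apply move, new state
            apply_time += time.perf_counter() - t0
    return {"avg_branching": total_branching / positions,
            "gen_us_per_pos": 1e6 * gen_time / positions,
            "apply_us_per_pos": 1e6 * apply_time / positions}
```

Comparing these numbers across games gives a quick, if crude, ranking on properties 1, 2, 7 and 8.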

Thanks,
Cameron

marks...@gmail.com

unread,
Feb 23, 2011, 4:27:14 PM2/23/11
to

On 23-Feb-2011, Cameron Browne <camb...@googlemail.com> wrote:

> I'm doing a bit of work on Monte Carlo Tree Search (MCTS)
> which is a new AI approach for move planning in games.

Monte Carlo has been around for many years. I think Flume would be well
suited for random playouts. You have to estimate how many moves a given
area will yield, and random playouts would be perfect for that.

Flume rule sheet:
http://www.marksteeregames.com/Flume_Go_rules.pdf

Flume is kind of an improved Dots and Boxes. Same idea but larger game
tree.

-Mark

Mark Steere Games
http://www.marksteeregames.com

christian

unread,
Feb 23, 2011, 5:33:32 PM2/23/11
to

Hi Cameron,

I'm not in the programming business, but because of the 2012 havannah
challenge I follow what's going on as best I can. Havannah never
lent itself to a traditional evaluation function, but MC
programming has made remarkable progress - I just lost a base-9 game
against Castro_Bot by Timo Ewalds
http://www.littlegolem.net/jsp/info/player.jsp?plid=21870

Wanderer by Richard Lorenz
http://www.littlegolem.net/jsp/info/player.jsp?plid=21160
and Deep Fork by Thomas Reinhardt
http://www.littlegolem.net/jsp/info/player.jsp?plid=20779
are also quite good.

Of course these bots excel on smaller boards. On base-8 to base-10
boards they have serious flaws, but choosing the wrong strategy can be
quite lethal, because they're very strong at exploiting forced
tactical play. I just got that rubbed in once more.

The general discussion among the participants (of whom I should also
mention Johannes Waldmann and Ingo Althofer) takes place in several
threads in the hex/havannah forum at Little Golem:
http://www.littlegolem.net/jsp/forum/forum2.jsp?forum=50

That's about all I can contribute to your comparative investigation at
the moment. Keep us posted on its progress please :)

Greg

unread,
Feb 23, 2011, 8:33:52 PM2/23/11
to

Hi Cameron,

We may have discussed this before, but I suggest looking at Cluster:
http://www.boardgamegeek.com/boardgame/22004/cluster

I experimented with a UCT player for that game and it did poorly. The
goal of the game is to have all pegs placed at a particular hole depth
and forming a connected group. I think UCT has difficulty with such a
specific goal; it's like finding a needle in a haystack. I don't
doubt that UCT could play the game and potentially play it well, but
it seems to me that, in order to do that, some form of domain
knowledge (e.g. a partial connectivity metric) would be necessary for
a game such as this. This observation could extend to the class of
games where reaching a specific "needle in the haystack" configuration
of pieces is the goal.

I do wonder about Chess and Chess-like games as well. I haven't found
any literature concerning the applicability of UCT to those sorts of
games.

Let us know what you find out!

-- Greg

Cameron Browne

unread,
Feb 24, 2011, 3:25:29 AM2/24/11
to
Thanks guys!

Mark: Flume does indeed look applicable on a number of counts:
composite moves, a large branching factor, and the fact that the
immediate reward of each move is not obvious. Would you say that the
complexity of the moves makes it low in clarity?

Christian: Yes, MCTS is revolutionising AI for a lot of games; we might
be seeing a paradigm shift away from alpha-beta search.

Greg: Thanks I'll check out Cluster. UCT tends to play poorly in most
cases unless tweaked or enhanced with domain knowledge - do you think
this was more pronounced than usual with Cluster?

I've also wondered why there have been no publications on UCT for
Chess - that's usually the first game that AI researchers attack!
Presumably the results are not good, and the Chess AIs these days are
so good that there'd be no comparison. I guess the more subgoals or
strategies a game requires to be played well, the more domain
knowledge has to be incorporated if UCT is to work; perhaps I
should be looking at the distinction between tactical and strategic
games.

Cameron

spielstein

unread,
Feb 24, 2011, 4:43:10 AM2/24/11
to
Hi Cameron,
the Mixtour applet uses UCT (about 30,000 iterations)

http://spielstein.com/games/mixtour/play

which is quite strong. You can adjust the AI strength (= num of
iterations) by a query param like

http://spielstein.com/games/mixtour/play?whiteStrength=5

I'm looking for ways to optimize the search now to reduce the
necessary amount of iterations. I'm very interested in your findings,
Cameron. Please keep us informed!

Thanks,
Dieter

Larry Wheeler

unread,
Feb 24, 2011, 5:51:54 AM2/24/11
to
You're probably aware of this already, but Bruce Abramson's book _The Expected-Outcome Model of Two-Player Games_ does some experimentation with chess (though his decision to always promote pawns to queens seems like a cheat). What would you say is the best reference for UCT?

marks...@gmail.com

unread,
Feb 24, 2011, 12:19:38 PM2/24/11
to

On 24-Feb-2011, Cameron Browne <camb...@googlemail.com> wrote:

> Mark: Flume does indeed look applicable on a number of
> counts; composite moves, large branching factor and the
> immediate reward of each move is not obvious. Would you
> say that the complexity of the moves makes it low in clarity?

Definitely there is low clarity in Flume, especially in the opening game,
exactly like Dots and Boxes. The tree is so vastly branched that there's no
point in wasting thought on it, in the opening. Flume opening play amounts
to randomly setting up the board.

Flume is considered to be my best game by some, including perhaps me. When
the low-clarity, random setup phase ends and the thinking phase begins,
there's no deeper game.

Low clarity is not a general purpose mudball to sling at a game. Low
clarity is the flip side of high complexity, something crucial to making
certain games tick, like Flume.

I read Bruce Abramson's book many years ago and tried to apply it to Tanbo,
not a good game for Monte Carlo. But the concept stuck with me, and I had
already thought of Monte Carlo as being a potentially excellent technique
for cutting through the fog in Flume.

Greg

unread,
Feb 24, 2011, 12:26:10 PM2/24/11
to

Cameron wrote:

[Greg: Thanks I'll check out Cluster. UCT tends to play poorly in most
cases unless tweaked or enhanced with domain knowledge - do you think
this was more pronounced than usual with Cluster?]

Without adding significant domain knowledge, yes.

Games which UCT excels at seem to be reduction games where the moves
tend to diminish and/or there is some "force" driving the game to an
end.
As an observation, many (if not most) of Mark Steere's games seem to
have that property so perhaps Mark has created fertile territory for
UCT :).

Contrast that with positional games which can stalemate. I'll make an
arguable and brash statement that, in general, if a game can end in a
draw (typically with pieces cycling around the board to no end), then
it's likely not a good fit for UCT since I believe UCT will have a
hard time finding the narrow line(s) of play which leads to a win.
UCT "wants" the search space to be cone shaped tapering at the end, it
doesn't like "needle in the haystack" end games where the ratio of
wins & losses to the remaining search space is small.

Here's another thought. Positional games like Chess can be
problematic due to the heavy demand on move generation during
playouts. One could randomly select a piece and select from the
potential moves for that single piece. However, that leads to biases:
since not all pieces have the same number of moves available to them,
the individual moves of pieces with many options will tend to be
underrepresented in the playouts. This applies to other games as
well. An unbiased and
fast playout method is a huge bonus. As you well know, connection
games lend themselves to quick playout methods, but that advantage
doesn't appear to generalize to the broader class of games.
(e.g. a "random fillout" of pieces in Chess would probably not make
much sense)
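The bias described above is easy to demonstrate with a toy example. The piece and move counts below are invented for illustration: piece "A" has one legal move, piece "B" has nine.

```python
import random
from collections import Counter

moves_by_piece = {"A": ["a1"], "B": ["b%d" % i for i in range(9)]}
all_moves = [m for ms in moves_by_piece.values() for m in ms]

def pick_uniform():
    # Unbiased: every legal move equally likely (1/10 each).
    return random.choice(all_moves)

def pick_piece_first():
    # Biased: choose a piece, then one of its moves. Piece A's single
    # move now gets probability 1/2 instead of 1/10, while each of B's
    # nine moves drops from 1/10 to 1/18.
    piece = random.choice(list(moves_by_piece))
    return random.choice(moves_by_piece[piece])

random.seed(0)
n = 100_000
uniform = Counter(pick_uniform() for _ in range(n))
biased = Counter(pick_piece_first() for _ in range(n))
print(uniform["a1"] / n)  # ~0.10
print(biased["a1"] / n)   # ~0.50
```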

-- Greg

Greg

unread,
Feb 24, 2011, 2:59:33 PM2/24/11
to

I forgot to mention that there's an Axiom Cluster game available for
download at the board game geek site.

-- Greg

christian

unread,
Feb 24, 2011, 4:35:15 PM2/24/11
to
On Feb 24, 6:26 pm, Greg <dreamwa...@yahoo.com> wrote:
>
> Games which UCT excel at seem to be reduction games where the moves
> tend to diminish and/or there is some "force" driving the game to an
> end.
> As an observation, many (if not most) of Mark Steere's games seem to
> have that property so perhaps Mark has created fertile territory for
> UCT :).
>
> Contrast that with positional games which can stalemate.  I'll make an
> arguable and brash statement that, in general, if a game can end in a
> draw (typically with pieces cycling around the board to no end), then
> it's likely not a good fit for UCT since I believe UCT will have a
> hard time finding the narrow line(s) of play which leads to a win.

Just your luck Mark, with a style based on hard finitude, cyclophobia
and the eradication of draws from the face of the universe, you end up
making cannon fodder for MC/UTC evaluation, while those damn cyclic
misconceptions dodge the bullet ;-)

christian

unread,
Feb 24, 2011, 4:57:12 PM2/24/11
to
On Feb 24, 10:35 pm, christian <christ...@mindsports.nl> wrote:

* UCT

marks...@gmail.com

unread,
Feb 24, 2011, 6:05:59 PM2/24/11
to

On 24-Feb-2011, Greg <dream...@yahoo.com> wrote:

> As an observation, many (if not most) of Mark Steere's
> games seem to have that property so perhaps Mark has
> created fertile territory for UCT :).

Random search techniques are not suitable for most games because of the
enormous computer power required to generate enough random, complete plays
for meaningful patterns to emerge in the data. Having an infinite
tree certainly doesn't help to this end. Even my finite games are mostly
unsuitable for random search, like Oust for example. Oust would be
resistant to all forms of AI, especially random search.

> UCT "wants" the search space to be cone shaped
> tapering at the end, it doesn't like "needle in the haystack"
> end games where the ratio of wins & losses to the
> remaining search space is small.

Random search isn't inherently afraid of haystack needles. All of the bad
strategies random play engenders cancel each other out, leaving intelligent
play outstanding. Again, it's just the raw computer power needed to
generate all that data. One essential requirement for random search is a
virtual nothing of a heuristic. In Flume, for example, you need to know
whether an even or an odd number of turns is required to fill in a given
area, on average. More specifically, there will be a distribution. If it's
a spike on one number, then it's settled. There are n turns available in
that region. If the spikes are divided between two numbers, then there's a
"switch" in the region - a formation which can require an even or an odd
number of turns to fill out, depending on how the fill-in is initiated. The
data might also indicate two switches in the region, etc.
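The statistic Mark describes can be sketched on a toy region. This is emphatically NOT real Flume: the "region" is just a line of five empty cells between filled borders, and a placement whose two neighbours are already filled grants an extra (free) turn - a crude 1-D stand-in for Flume's extra-turn rule.

```python
import random
from collections import Counter

def random_fill_turns(cells=5):
    filled = {-1, cells}                        # virtual filled borders
    order = random.sample(range(cells), cells)  # random fill-in order
    turns = 0
    for c in order:
        # A placement with both neighbours already filled is "free":
        # it doesn't cost a turn of its own.
        if not (c - 1 in filled and c + 1 in filled):
            turns += 1
        filled.add(c)
    return turns

def turn_distribution(samples=20_000):
    counts = Counter(random_fill_turns() for _ in range(samples))
    return {t: n / samples for t, n in sorted(counts.items())}

print(turn_distribution())
# Several spikes mean the region's turn count depends on how the
# fill-in is initiated - what Mark calls a "switch".
```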

marks...@gmail.com

unread,
Feb 24, 2011, 6:11:57 PM2/24/11
to

On 24-Feb-2011, christian <chri...@mindsports.nl> wrote:

> On Feb 24, 6:26 pm, Greg <dreamwa...@yahoo.com> wrote:
> >
> > As an observation, many (if not most) of Mark Steere's
> > games seem to have that property
>

> Just your luck Mark, with a style based on hard finitude,
> cyclophobia end the eradication of draws from the face
> of the universe, you end up making cannon fodder for
> MC/UTC evaluation, while those damn cylic
> misconceptions dodge the bullet ;-)

Sheer luck, with Flume :) Most of my games would not be well served by a
random search AI. I was already planning an intro to Flume tactics, and now
I'm planning to incorporate this Monte Carlo business into that. Stay
tuned....

christian

unread,
Feb 25, 2011, 5:18:49 AM2/25/11
to
On Feb 25, 12:05 am, markste...@gmail.com wrote:
>
>
> Oust would be resistant to all forms of AI, especially random search.
>
That's what I thought of havannah, but I was wrong: these bots may
have their flaws, but there's no denying they _do_ play havannah. I'd
be very interested in how Oust would hold up. Maybe you should
make it a challenge :)

Greg

unread,
Feb 25, 2011, 9:41:58 AM2/25/11
to

That's also what was thought of Go.

spielstein

unread,
Feb 25, 2011, 10:43:15 AM2/25/11
to
I recently discovered MC/UCT as a nice tool for testing game designs.
I can try out things and get some kind of result without bothering
my friends with testing sessions. But I never get any measured
values about how much fun a game actually is -- fortunately! I'm not
so much interested in creating games for machines. If an algorithm
plays a game well, what does that mean anyhow?

- Dieter


Greg

unread,
Feb 25, 2011, 1:00:33 PM2/25/11
to

I programmed Oust using "conventional search" (not UCT) and although
it beats me, it does not play a strong game. I would like to
eventually build a UCT based Oust.

-- Greg

marks...@gmail.com

unread,
Feb 25, 2011, 1:45:30 PM2/25/11
to

On 25-Feb-2011, christian <chri...@mindsports.nl> wrote:

> On Feb 25, 12:05 am, markste...@gmail.com wrote:
> >
> >
> > Oust would be resistant to all forms of AI, especially
> > random search.
> >

> I'd be very interested in how Oust would hoold itself. Maybe
> you should make it a challenge :)

The difficult thing about Oust AI is the counter-intuitive gameplay. Oust
is a game of annihilation, yet you seriously don't want to kill anything in
the opening game. This paradox would be especially problematic for random
search AI which, because of the massive playouts, needs to have a tiny
heuristic. The obvious tiny heuristic in a game of total annihilation is
"Who has more pieces remaining on the board?" This would lead to precisely
the wrong moves in Oust opening gameplay, causing the program to kill
everything within reach, ultimately leading to certain self defeat.

christian

unread,
Feb 25, 2011, 1:45:34 PM2/25/11
to

Oust is particularly interesting because it converges and diverges
again, like a sand timer. In the latter stage there are larger groups
involved and decisive lines would probably show up quite clearly in
random playouts, despite the divergence. In the converging stage the
results of playouts may be less clear, but the branch density is
significantly lower than in Go or havannah, so the number of playouts
could be accordingly higher. My guess is that MC/UCT is a better tool
than a conventional evaluation function and alpha/beta search, and
that Oust isn't any more 'safe' than Go or havannah.

christian

unread,
Feb 25, 2011, 2:00:58 PM2/25/11
to
On Feb 25, 7:45 pm, markste...@gmail.com wrote:

> The obvious tiny heuristic in a game of total annihilation is
> "Who has more pieces remaining on the board?"  This would lead to precisely
> the wrong moves in Oust opening gameplay, causing the program to kill
> everything within reach, ultimately leading to certain self defeat.

MC is based on the end results of playouts. It doesn't care about
'counter-intuitive' or heuristics. This is from "Creating an Upper-
Confidence-Tree program for Havannah" by F. Teytaud and O. Teytaud:

"Upper Confidence Trees are the most straightforward choice when
implementing a Monte-Carlo Tree Search. The basic principle is as
follows. As long as there is time before playing, the algorithm
performs random simulations from a UCT tree leaf. These simulations
(playouts) are complete possible games, from the root of the tree
until the game is over, played by a random player playing both black
and white. In its most simple version, the random player, which is
used when in a situation s which is not in the tree, just plays
randomly and uniformly among legal moves in s; we did not implement
anything more sophisticated, as we are more interested in algorithms
than in heuristic tricks. When the random player is in a situation s
already in the UCT tree, then its choices depend on the statistics:
number of wins and number of losses in previous games, for each legal
move in s. The detailed formula used for choosing a move depending on
statistics is termed a bandit formula, discussed below. After each
simulation, the first situation of the simulated game that was not yet
in memory is archived in memory, and all statistics of numbers of wins
and numbers of defeats are updated in each situation in memory which
was traversed by the simulation."

http://ticc.uvt.nl/icga/acg12/proceedings/Contribution128.pdf
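For reference, the loop the Teytauds describe fits in a few dozen lines. This is a generic sketch, not any of the bots mentioned here; the game interface (legal_moves, play, winner, to_move) is assumed, and winner() returns +1 or -1 once the game is over (no draws) and None before that.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())
        self.wins = 0.0
        self.visits = 0

    def ucb_child(self, c=1.4):
        # The "bandit formula": exploit good win rates, but keep
        # exploring rarely tried moves.
        return max(self.children,
                   key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def uct_search(root_state, iterations=30_000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend the stored tree via the bandit formula.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: archive the first situation not yet in memory.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.play(move), node, move))
            node = node.children[-1]
        # 3. Playout: uniformly random moves until the game is over.
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.legal_moves()))
        result = state.winner()
        # 4. Backpropagation: update win/visit stats along the path.
        while node is not None:
            node.visits += 1
            if node.parent is not None and node.parent.state.to_move == result:
                node.wins += 1  # win for the player who made node.move
            node = node.parent
    return max(root.children, key=lambda n: n.visits).move
```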

marks...@gmail.com

unread,
Feb 25, 2011, 3:03:34 PM2/25/11
to

On 25-Feb-2011, Greg <dream...@yahoo.com> wrote:
>
> I programmed Oust using "conventional search" (not
> UCT) and although it beats me, it does not play a strong
> game. I would like to eventually build a UCT based Oust.

Rive, Tanbo and Oust all have high stone recycling rates, even though
they're all finite of course. Because of this, they're my most difficult
games for AI in that order, with Rive being the most difficult due to its
massive recycling rate.

marks...@gmail.com

unread,
Feb 25, 2011, 3:23:32 PM2/25/11
to

On 25-Feb-2011, christian <chri...@mindsports.nl> wrote:

> "Upper Confidence Trees are the most straightforward

> choice when implementing a Monte-Carlo Tree Search..."

This is getting beyond my forte, designing games. Sure, in the Oust end
game, UCT and Monte Carlo and a variety of other AI techniques will work
fabulously. You'd never get to that point though because the same AI
techniques that work so well in the endgame would flop in the opening game.
Call it a hunch. The large stone recycling rate in Oust is going to present
a problem to AI.

spielstein

unread,
Feb 26, 2011, 5:44:57 AM2/26/11
to
> Rive, Tanbo and Oust all have high stone recycling rates, even though
> they're all finite of course. Because of this, they're my most difficult
> games for AI in that order, with Rive being the most difficult due to its
> massive recycling rate.

> Call it a hunch.  The large stone recycling rate in Oust is going to present
> a problem to AI.

Any evidence, Mark?

marks...@gmail.com

unread,
Feb 26, 2011, 9:42:52 AM2/26/11
to

On 26-Feb-2011, spielstein <spiel...@gmail.com> wrote:

> > Call it a hunch.  The large stone recycling rate in Oust
> > is going to present a problem to AI.
>
> Any evidence, Mark?

No, Dieter. That's why it's a hunch.

Greg

unread,
Feb 26, 2011, 10:27:13 AM2/26/11
to
On Feb 25, 3:23 pm, markste...@gmail.com wrote:

One difficulty I can see with programming MC Oust is that move
generation is expensive due to the group detection required for legal
placement and capture. Perhaps there's an efficient way to manage
this incrementally, but I haven't found a good way to do it.
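For what it's worth, the usual starting point for incremental group management is union-find: placements merge groups in near-constant time, though removals are the hard part, since union-find cannot split a group - which may be exactly where Oust's captures bite. A sketch, assuming cells are any hashable values and the caller supplies the same-colour neighbours of each placement:

```python
class Groups:
    def __init__(self):
        self.parent = {}   # cell -> parent cell
        self.size = {}     # root cell -> size of its group

    def find(self, cell):
        # Follow parents to the root, compressing the path as we go.
        while self.parent[cell] != cell:
            self.parent[cell] = self.parent[self.parent[cell]]
            cell = self.parent[cell]
        return cell

    def place(self, cell, neighbours):
        # Add a stone and union it with any same-colour neighbours
        # already on the board (absent neighbours are ignored).
        self.parent[cell] = cell
        self.size[cell] = 1
        for nb in neighbours:
            if nb in self.parent:
                a, b = self.find(cell), self.find(nb)
                if a != b:
                    if self.size[a] < self.size[b]:
                        a, b = b, a        # union by size
                    self.parent[b] = a
                    self.size[a] += self.size.pop(b)

    def group_size(self, cell):
        return self.size[self.find(cell)]
```

Group sizes are then available in effectively constant time during move generation, instead of a flood fill per candidate move; a capture would still force rebuilding the affected region.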

marks...@gmail.com

unread,
Feb 26, 2011, 12:40:58 PM2/26/11
to

On 26-Feb-2011, Greg <dream...@yahoo.com> wrote:

> One difficulty I can see with programming MC Oust is
> that move generation is expensive due to the group
> detection required for legal placement and capture.

Yep, that too. Oust is reportedly difficult to program efficiently even
*without* the AI.

marks...@gmail.com

unread,
Feb 26, 2011, 1:05:29 PM2/26/11
to
One thing that seems to be escaping Dieter and Christian is the
characteristic that sets random search AI apart from other AI techniques -
the massive amount of game tree that needs to be traversed. You throw in
something like group size calculation at every turn, and you better have a
computer from Star Trek: Generation Z if you want to make any headway with
UCT in Oust.

Greg

unread,
Feb 26, 2011, 1:16:57 PM2/26/11
to
On Feb 26, 12:40 pm, markste...@gmail.com wrote:

My 11x11 Oust program searches 4 ply deep and examines over 100k moves
at 5 secs/move running on a "vintage" 2.8GHz Pentium.

Greg

unread,
Feb 26, 2011, 1:20:12 PM2/26/11
to

And Deanna Troi says:

"I'm sensing a really interesting challenge here."
:)

marks...@gmail.com

unread,
Feb 26, 2011, 1:49:34 PM2/26/11
to

On 26-Feb-2011, Greg <dream...@yahoo.com> wrote:

> And Deanna Troy says:
>
> "I'm sensing a really interesting challenge here."
> :)

Yes, an Oust competition where the prize is squeezing Deanna Troi's pert
melons.

marks...@gmail.com

unread,
Feb 26, 2011, 2:22:55 PM2/26/11
to

On 26-Feb-2011, Greg <dream...@yahoo.com> wrote:

> On Feb 26, 12:40 pm, markste...@gmail.com wrote:
> > On 26-Feb-2011, Greg <dreamwa...@yahoo.com> wrote:
> >
> > > One difficulty I can see with programming MC Oust is
> > > that move generation is expensive due to the group
> > > detection required for legal placement and capture.
> >
> > Yep, that too.  Oust is reportedly difficult to program
> > efficiently even *without* the AI.

>
> My 11x11 Oust program searches 4 ply deep and
> examines over 100k moves at 5 secs/move running on a
> "vintage" 2.8Ghz Pentium.

Exactly. Now run that 80 ply deep and hope the universe doesn't turn cold
[shaking head, palms outstretched, etc.]

christian

unread,
Feb 26, 2011, 2:43:12 PM2/26/11
to

Apparently it is, irrespective of hopes to the contrary. But we're
talking alpha/beta search and an evaluation function here, right?

Let's say UCT is relatively successful in Go and Havannah. In Havannah
positions must be checked for win/loss to establish whether or not the
game has ended, meaning tests for ring, bridge and fork at every move.
In Go groups must be checked for life/death, so group recognition is
an issue there too.

Yet the progress of these programs is impressive, as I've been able to
establish rather firmly by playing against the best Havannah bots.
Oust is a major challenge, but to say it's 'out of reach' ... I
wouldn't bet on it :)

marks...@gmail.com

unread,
Feb 26, 2011, 2:55:51 PM2/26/11
to

On 26-Feb-2011, christian <chri...@mindsports.nl> wrote:

> Oust is a major challenge, but to say it's 'out of reach' ... I
> wouldn't bet on it :)

I wouldn't either. I'm just positing that UCT is the wrong approach for
Oust. I don't know what the right approach will be but it'll take someone
who understands the game, i.e. has played a few dozen times, and who's very
skilled at AI programming. Out of reach? No. Hard to reach :D

Greg

unread,
Feb 26, 2011, 2:59:18 PM2/26/11
to

The 4 ply search is for a full width alpha-beta search so it
propagates exponentially. For an off the cuff back of the envelope
calculation, let's say there's on average 40 moves/player on the 11x11
board. The 4 ply search above corresponds to roughly 40^4 = 2560k
moves generated. Due to other factors (e.g. alpha-beta pruning, not
actually completing the 4th ply), let's knock that down further to a
conservative 1000k moves generated. Assume that a single UCT playout
is typically 80 moves deep from the opening. 1000k/80 would equate to
approx 13k UCT playouts. Not huge, but not trivial either, and Axiom
doesn't result in the fastest possible speed due to it being a general
game system.
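Replaying the back-of-the-envelope arithmetic above (all inputs are Greg's assumed figures for 11x11 Oust, not measurements):

```python
branching = 40            # assumed average moves per position
plies = 4
full_width = branching ** plies
print(full_width)         # 2560000, i.e. "roughly 40^4 = 2560k"

generated = 1_000_000     # knocked down for pruning, incomplete 4th ply
playout_depth = 80        # assumed moves per playout from the opening
print(generated // playout_depth)  # 12500, i.e. approx 13k playouts
```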

Now, if move generation were made "smarter" via incremental group
management, that number could rise dramatically. Also, there might be
other insights which trade less accurate playouts for a higher playout
count, to overall advantage.

IMO, no one (including me) has made a compelling argument for or
against the viability of a UCT player for Oust. An existence proof
would help.

-- Greg


marks...@gmail.com

unread,
Feb 26, 2011, 4:21:28 PM2/26/11
to

On 26-Feb-2011, Greg <dream...@yahoo.com> wrote:

> IMO, no one (including me) has made a compelling
> argument for or against the viability of a UCT player for
> Oust. An existence proof would help.

Ok. What's an existence proof?

Greg

unread,
Feb 26, 2011, 5:32:21 PM2/26/11
to
On Feb 26, 4:21 pm, markste...@gmail.com wrote:

> On 26-Feb-2011, Greg <dreamwa...@yahoo.com> wrote:
>
> > IMO, no one (including me) has made a compelling
> > argument for or against the viability of a UCT player for
> > Oust.  An existence proof would help.
>
> Ok.  What's an existence proof?  

I'm fairly sure I've abused the term. I'm just saying that we could
use at least one example of a UCT based Oust program to draw some
conclusions from, much like what Christian has tried to do with
Havannah.

christian

unread,
Feb 27, 2011, 5:17:49 AM2/27/11
to

Actually there are Castro, Deep Fork, Wanderer, Gambler, Ring von
Fehler and Shakti (a havannah bot that has the name of one of my other
games), to name a few, and the conclusion is that UCT is a stronger
approach (in havannah!) than an alpha/beta search based on a
traditional evaluation function.

Of course it's far above my head in terms of what these guys - Timo
Ewalds, Thomas Reinhardt, Richard Lorenz, prof. Johannes Waldmann,
prof. Ingo Althofer, Richard Pijl, Fabien & Olivier Teytaud and others
- are actually doing, because they're approaching it from different
angles and with different results. UCT isn't a single straight
road, obviously, and the programs differ in strength and style.
For what it's worth, Castro (Timo Ewalds) is currently the one I fear
the most. Losing base-9 because of a bad plan and not being able to
turn the tables is a wake-up call.

Greg

unread,
Feb 27, 2011, 11:16:00 AM2/27/11
to

[UCT is a stronger approach (in havannah!) than an alpha/beta search
based on a traditional evaluation function.]

That's not surprising at all given past success in Go and other
connection games.

[UCT isn't a single straight road, obviously, and the programs differ
in strength and style.]

It's also not surprising that it's being approached from different
angles; the exciting part is that the technique is ripe for innovation
when applied to new games. The phrase "Never say never" comes to mind
when one considers Samuel's checkers-playing program, Chess 4.0, Deep
Blue, Hexy, Queen-Bee, Chinook (I'm certain I've left off some key
ones), etc.
