
Another Murat mutant cube skill experiment based on game stages


MK

Jan 13, 2024, 4:36:09 PM
At last I found the time to edit and post the mutant
experiment I've been mentioning for almost a year.

This article has two sections: an "introduction" about my
own experiences/experiments against bots and how
Axel's Murat mutant cube skill experiments came about
(readers already familiar with those may skip this section);
and a "description" of this new Murat mutant cube skill
experiment in detail.

INTRODUCTION

Over the past 25+ years, I have played tens of thousands
of games/matches against Jellyfish, Snowie, GnuBG and
XG, mostly for fun and a couple of thousand in serious
experiments of long sessions that I recorded and shared
on my website. (See https://montanaonline.net/backgammon).

Since nobody seemed to trust my results, I kept urging
people to do their own experiments, but unfortunately I
couldn't advise anyone on how to go about it because
I myself couldn't quite formulate what I was doing right.

Having done well against bots early on, without knowing
anything about equities, doubling points, etc., I thought
that I should keep doing what was working well for me
(at least against bots, even if never tested against human
"giants"), and purposefully refused to learn those "skills".

For example, I believed that "cube skill theory" was
overtouted, elaborate bullshit that could be defeated
by defying it and forcing games/matches to be played
out to the end, turning them into virtually cubeless
(and thus longer-lasting) ones, giving luck more time to
even out and allowing skill to emerge more decisively.
Hence, I was making cube decisions based on how much play
was still left in the game: aggressively in the early stages
and cautiously in the late stages of games.

Because it would be unreasonable/impossible to expect
to reproduce my results by playing like me or by making
a bot play like me, I proposed a very simple mutant bot
experiment with the crudest cube strategy to start with,
i.e. doubling at >50% MWC and taking at >0% MWC. (If
that could cast even a small doubt on the so-called
"cube strategy theory", then more complex mutant cube
strategies could be tried later on.)
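For concreteness, that crudest strategy fits in a few lines of Python. This is only my sketch of the rule as stated above (the function name and interface are illustrative, not from any existing script):

```python
def crude_cube_decision(win_prob, action):
    """Crudest mutant cube strategy: double at >50% MWC, take at >0% MWC.

    win_prob: the mutant's match-winning chance, 0.0 to 1.0.
    action:   "double?" when on roll with cube access,
              "take?" when being doubled.
    Returns True to double/take, False to roll on/drop.
    """
    if action == "double?":
        return win_prob > 0.50
    if action == "take?":
        return win_prob > 0.0   # takes literally every double
    raise ValueError("unknown action: " + action)
```

So a 50.1% favorite doubles immediately, and the bot never drops unless it is mathematically dead.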

Finally, Axel ran that crude experiment a couple of years
ago and then ran even more similar experiments of his own.
Anyone interested can find and read the many threads
about these experiments in the RGB archives.

===================================

DESCRIPTION

In this mutant cube experiment, a game is divided into 5
stages: opening, early, middle, late and closing, each
assigned my arbitrary fartoffski double and take points.

Based on 6 million rated, finished games as of 1-13-2024
at https://zooescape.com/backgammon-stats.pl,
54 will be accepted as the average number of rolls/moves
in a game and divided among the above 5 stages as follows:

Opening: 6 rolls/moves (1 thru 6), double >50%, take >0%
Early: 12 rolls/moves (7 thru 18), double >55%, take >5%
Middle: 18 rolls/moves (19 thru 36), double >60%, take >10%
Late: 12 rolls/moves (37 thru 48), double >65%, take >15%
Closing: 6 rolls/moves (49 thru 54), double >70%, take >20%

The mutant bot (script) will keep a count of the rolls in
order to determine the current stage of the game.

So, for example: if the mutant bot is on roll within the early
game stage (let's say the 14th roll), has access to the cube,
and its winning chance is >55%, it will double. If the mutant
bot is doubled within the late game stage (let's say the 45th
roll) and its winning chance is >15%, it will take; else it
will drop.
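The staged rule above can also be sketched in Python. The stage boundaries and thresholds are copied straight from the table; everything else (names, the choice to keep the closing-stage thresholds past roll 54) is my illustrative assumption:

```python
# Staged mutant cube strategy: stage boundaries and double/take
# points taken directly from the 5-stage table above.
STAGES = [
    # (last roll of stage, double point, take point)
    ( 6, 0.50, 0.00),  # opening: rolls 1-6
    (18, 0.55, 0.05),  # early:   rolls 7-18
    (36, 0.60, 0.10),  # middle:  rolls 19-36
    (48, 0.65, 0.15),  # late:    rolls 37-48
    (54, 0.70, 0.20),  # closing: rolls 49-54
]

def stage_points(roll_number):
    """Return (double_point, take_point) for the given roll count.
    Rolls beyond 54 keep the closing-stage thresholds (my assumption)."""
    for last_roll, dp, tp in STAGES:
        if roll_number <= last_roll:
            return dp, tp
    return STAGES[-1][1], STAGES[-1][2]

def should_double(roll_number, win_prob):
    """On roll with cube access: double if winning chance exceeds
    the current stage's double point."""
    dp, _ = stage_points(roll_number)
    return win_prob > dp

def should_take(roll_number, win_prob):
    """Being doubled: take if winning chance exceeds the current
    stage's take point, else drop."""
    _, tp = stage_points(roll_number)
    return win_prob > tp
```

The worked example above then reads: `should_double(14, 0.56)` is True (early stage, >55%), and `should_take(45, 0.16)` is True (late stage, >15%).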

I predict that my above fartoffski mutant cube strategy will
do at least as well as Noo-BG World-Class, in both games
and points won.

If Axel or others decide to run this experiment, they should
at least predict games and points won beforehand to show how
confident they are in the current bots. Of course, anyone
can make predictions even without running the experiment and
compare them to the results of experiments done by others.

After the experiment, the following stats should be published:

- games won/lost (less important)
- total points won/lost (more important)
- ppg won/lost
- pwppp won/lost
- cube error rates
- overall error rates
- ELO's

And, of course, the games should be recorded/saved in JF or
another compact text format and be publicly shared.

I'm not asking anyone (who doesn't work for me ;) to do this
experiment. I'm just proposing it. If nobody does it, I may try
to spend a reasonable amount of time learning Python scripting
and do it myself, especially if I get some help from someone
who has already written a similar script that I can adapt. I may
even consider paying for a working script that will accomplish
this, if that's the only way. I think it will be worth it.

MK