
Tiebreakers


ian

Jan 10, 2007, 6:26:38 PM
Most tournaments in Europe are EGF rated. Would it be possible to start
using GoR as a tiebreaker? Often, when numbers of wins are equal, we try
to see who played the best with SOS, SOSOS, SODOS or others.
Would it be possible to use a 'tournament performance GoR' as the tiebreaker?
This being the rating at which a player would have had to enter in order
to experience no rating change.

For 100% wins, this is useless of course.

Robert Jasiek

Jan 11, 2007, 3:26:45 AM
On 10 Jan 2007 15:26:38 -0800, "ian" <siva...@gmail.com> wrote:
>Most tournaments in the Europe are EGF rated. Is it possible to start
>using GoR as a tiebreaker.

In a tournament where the only connection to the EGF is a possible
result report you can essentially do whatever you want. You might use
birthdays, ASCII ordering of names, etc. as a tiebreaker. Not that that
would make any sense, but it is possible. The question is rather why
you would want to use GoR, i.e. why you think it might make sense.

>Is it possible to use 'tournament performance GoR' as the tiebreaker.
>This being the rating at which they would have entered at to experience
>no rating change.

I am not sure that I understand what you mean. Do you want to use as
tiebreaker each player's rating that he had just before the start of
the tournament?

Like for every tiebreaker, the major question of quality is: How does
its quality compare to the quality of using no tiebreaker at all?

If you view a tournament as a single event during which each
player shows a performance, then GoR is counter-productive because it
pretends to say something about a matter on which it says nothing
(namely each player's performance during the tournament). From this
view, using no tiebreaker at all is better than using GoR. To give GoR
a positive meaning in principle, it is necessary to view a tournament
as the last event of a series of events during which the players have
shown some performance. Then the quality of GoR as a tiebreaker
depends on the quality of GoR as a rating. Opinions differ widely on
what that quality is. However, regardless of this dilemma, one can say
something basic: The significance of GoR (however one measures it;
significance refers to confidence error) is pretty rough; much rougher
than you would need for a meaningful tiebreak order. This is also true
for commonly used tiebreakers, and I have argued that it is true even
for SOS. Therefore I always recommend never using any tiebreaker but
sharing places (or, if this is not possible because flight tickets or
seeding places cannot be shared, playing more rounds until the
relevant tie is broken by number of wins).

If you disagree because you are able to assess significance and prove
why it is small enough to be meaningful, then do so for GoR, SOS, and
using no tiebreaker, and compare the significances. Since that would
be revolutionary research, please publish that here. I am curious to
learn more about it;)

It is easy to compare the relative quality of SOS with SOSOS or SODOS
and conclude that, if one uses one of these tiebreakers at all, then
it should be SOS. Comparing SOS to ROS or IROS is already tougher.
Comparing SOS to GoR is much tougher still because the considered
datasets differ fundamentally. It is not easy to formulate even in
principle how exactly one should measure the two for comparing them at
all usefully. All you are left with is common sense. On that level of
argumentation, one random variable is as good or bad as another, but
the question remains why use a random variable at all when one can
simply use no tiebreaker. IMO, the major purpose of tiebreakers is to
deceive people into believing that a precise result ordering would
have been meaningfully possible for a tournament.

Instead of using more and more different tiebreakers, it would be much
more worthwhile to improve tournament organization to ensure a timely
process and thereby gain time for extra tiebreak playoff rounds, if
needed and possibly using reduced thinking times. This is what we do
in the German Championship Finals (if the direct comparison does not
provide enough meaningful and unambiguous information).

--
robert jasiek

gaga

Jan 11, 2007, 4:32:59 AM
But that is what people want, ay? There is this prevalent delusion that
a result obtained by a formalized algorithm of some kind must be logical
and reasonable. To see that results obtained in such a formal way are in
fact random often requires quite a deep understanding of the procedures -
something that the majority cannot be bothered with.
Still, this should not stop us searching for a good algorithm. From your
post I understand that there is no resolution in sight?


> Instead of using more and more different tiebreakers, it would be much
> more worthwhile to improve tournament organization to ensure a timely
> process and thereby gain time for extra tiebreak playoff rounds, if
> needed and possibly using reduced thinking times. This is what we do
> in the German Championship Finals (if the direct comparison does not
> provide enough meaningful and unambiguous information).
>

I think it is a reasonable approach.

//

Robert Jasiek

Jan 11, 2007, 5:26:42 AM
On Thu, 11 Jan 2007 10:32:59 +0100, gaga <ga...@gaga.se> wrote:
>But that is what people want, ay?

Tiebreakers matter for the top places. For the weaker players (in,
e.g., a MacMahon tournament), fine sorting is rather meaningless.
However, typically the opinion of all the many weaker players is
included. What should be considered is the opinion of the top players
only because for them it really matters how they are sorted (their
sorting can affect prizes, titles, seedings).

AFAIK, there is no reliable survey on the top players' opinions about
whether to use tiebreakers (or which). I have talked to some and all I
can say is that opinions differ: Some think that tiebreakers should
not be used, some think that tiebreakers should be used. If
tiebreakers are to be used, then SOS and direct comparison are the
favourites (it is unclear which gets more votes). Refinements of
either (like SOS-2 or a more precise specification of which direct
comparison is actually used) are still mostly not known, regardless of
whether they might be better than their ancestors.

The following is needed:
- better education about which tiebreakers exist
- better research about the degree of quality of tiebreakers
- reliable surveys (not fake statistics) among the top players (best
after better education will have been done)

--
robert jasiek

ian

Jan 11, 2007, 1:06:56 PM

On Jan 11, 8:26 am, Robert Jasiek <jas...@snafu.de> wrote:


> On 10 Jan 2007 15:26:38 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >Most tournaments in the Europe are EGF rated. Is it possible to start

> >using GoR as a tiebreaker.
>
> In a tournament where the only connection to the EGF is a possible


> result report you can essentially do whatever you want. You might use
> birthday, ascii ordering of names, etc. as tiebreaker. Not that that
> would make any sense, but it is possible. The question is rather why
> you would want to use GoR, i.e. why you think it might make sense.

The possibility of winning a tournament is already heavily influenced
by the McMahon bar: to be above the bar you need to demonstrate a
rating or a rank that meets certain criteria. EGF ratings have a
certain degree of credibility, at least as much as SOS I imagine.

>
> >Is it possible to use 'tournament performance GoR' as the tiebreaker.
> >This being the rating at which they would have entered at to experience

> >no rating change.
>
> I am not sure that I understand what you mean. Do you want to use as


> tiebreaker each player's rating that he had just before the start of
> the tournament?

No.

Each player enters the tournament with a rating, R(i);
at the end their rating will have changed by dR(i).

The tournament performance rating, T(i), is the rating at which player
i would have had to enter to achieve dR(i)=0.

>
> Like for every tiebreaker, the major question of quality is: How does
> its quality compare to the quality of using no tiebreaker at all?
>

I agree. What I am asking is whether tournament performance rating is of
high enough quality to use.

> If you view upon a tournament as a single event during that each
> player shows a performance, then GoR is counter-productive because it
> pretends to say something about that it says nothing (namely about
> each player's performance during the tournament). From this view,
> using no tiebreaker at all is better than using GoR. To give GoR a
> positive meaning in principle, it is necessary to view upon a
> tournament as the last event of a series of events during that the
> players have shown some performance. Then the quality of GoR as a
> tiebreaker depends on the quality of GoR as a rating. Opinions differ
> extremely on what the quality is. However, regardless of this dilemma,
> one can say something basic: The significance of GoR (however one
> measures it; significance refers to confidence error) is pretty rough;
> much rougher than you would need for a meaningful tiebreak order. This
> is also so for commonly used tiebreakers and I have said that it is so
> even for SOS. Therefore I always recommend to never use any tiebreaker
> but share places (or, if this is not possible because flight tickets
> or seeding places cannot be shared, play more rounds until the
> relevant tie is broken by number of wins).
>
> If you disagree because you are able to assess significance and prove
> why it is small enough to be meaningful, then do so for GoR, SOS, and
> using no tiebreaker, and compare the significances. Since that would
> be revolutionary research, please publish that here. I am curious to
> learn more about it;)

I would offer the London Open 2006 as one instance where Tournament
performance GoR gives a better result.

Robert Jasiek

Jan 11, 2007, 2:36:30 PM
On 11 Jan 2007 10:06:56 -0800, "ian" <siva...@gmail.com> wrote:
>The possibility of winning a tournament is already heavily influenced
>by the McMahon bar

Not really, because in almost all tournaments all players with
realistic chances to win the tournament or get one of the top prize
places are already included above the bar.

>EGF ratings have a
>certain degree of credibility, at least as much as SOS I imagine.

This is not so important here. The credibility of the ratings has to be
presumed in order to use any rating as a tiebreaker. So for the sake of
(simplifying) discussion, we can assume the ratings to be credible.
However, another aspect of the ratings is important here: their
significance. One might wish to require very good significance as part
of credibility; however, currently this is far from realistic.

>Each player enter's the tournament with a Rating, R(i)
>at the end their rating will change by dR(i).
>
>The tournament performance rating, T(i) is the rating for which player
>i enters at to acheive dR(i)=0

Ok, you define it like this. How can one ever determine T(i) if the
player has dR(i)<>0 during the tournament? Besides the various
assumptions that have been made for the rating system, you have to make
further (arbitrary) assumptions that allow you to determine some T(i).
As long as you don't tell us about those, we cannot determine the
quality of your tiebreaker. Even if you tell us, it will not be easy.

>What I am asking is if tournament performance is of high
>enough quality to use.

For R->oo, where R is the number of rounds, maybe :) Otherwise see
above for your missing specification of T(i) and my previous answer
(i.e. the quality of the ratings' inherent significance has to be
studied, too).

>I would offer the London Open 2006 as one instance where Tournament
>performance GoR gives a better result.

Please provide data, theory of T(i), analysis of rating significance,
analysis of T(i) significance, combination of both (theory and
practice), theory of significance of SOS (or alternatively no
tiebreakers used), significance of SOS for comparison in practice
applied to the tournament, comparison of both methods for the
tournament. Before that, it is far too early to speak of "better"
results because the measure for assessing quality is still missing.

--
robert jasiek

ian

Jan 11, 2007, 2:57:53 PM

On Jan 11, 7:36 pm, Robert Jasiek <jas...@snafu.de> wrote:


> On 11 Jan 2007 10:06:56 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >The possibility of winning a tournament is already heavily influenced
> >by the McMahon bar

>Not really because in almost all tournaments all players with
> realistic chances to win the tournament or get one of the top prize
> places are included.

No, I can show you UK tournaments where this has not been the case.

>
> >EGF ratings have a
> >certain degree of credibility, at least as much as SOS I imagine.
>
> This is not so important here. Credibility of ratings have to be


> presumed to use any rating as a tiebreaker. So for the sake of
> (simplifying) discussion, we can assume the ratings to be credible.
> However, another aspect of the ratings is important here: their
> significance. One might wish to require for credibility a very good
> significance, however, currently this is far from realistic.

You have lost me there.

>
> >Each player enter's the tournament with a Rating, R(i)
> >at the end their rating will change by dR(i).
>
> >The tournament performance rating, T(i) is the rating for which player
> >i enters at to acheive dR(i)=0

> Ok, you define it like this. How can one ever determine T(i) if the player
> has dR(i)<>0 during the tournament?

You've lost me there. When you apply the GoR rating algorithm you
obtain a rating change. That is dR(i). This is what Ales Cieply
calculates.

It is trivial to find what the tournament performance rating T(i) is.
Simply increment R(i) by dR(i) and repeat the calculation until you find
that dR(i) is 0. At this point we have the rating at which you have
performed.

Yes many assumptions are made for GoR. Yes it is not perfect. SOS is
not perfect either.

Besides the various assumptions
> that had been made for the rating system, you have to make further
> (arbitrary) assumptions that allow you determine some T(i). As long as
> you don't tell us about those, we cannot determine a quality of your
> tiebreaker. Even if you tell us, it will not be easy.
>
> >What I am asking is if tournament performance is of high

> >enough quality to use.
>
> For R->oo, where R is the number of rounds, maybe :) Otherwise see


> above for your missing specification of T(i) and my previous answer
> (i.e. the quality of the ratings' inherent significance has to be
> studied, too).
>
> >I would offer the London Open 2006 as one instance where Tournament
> >performance GoR gives a better result.

>Please provide data,

Ondrej Silt won the London Open on a tiebreak; Ben He came third. Ben
He's 1st round opponent had to drop out of the tournament, costing Ben
He a lot of SOS points.


>theory of T(i),
see above

> analysis of rating significance,
Ales has this

> analysis of T(i) significance, combination of both (theory and
> practice), theory of significance of SOS (or alternatively no
> tiebreakers used), significance of SOS for comparison in practice
> applied to the tournament, comparison of both methods for the
> tournament. Before it is far too early to speak of "better" results
> because the measure of assessing quality is still missing.
>

T(i) is as significant as R(i).

Robert Jasiek

Jan 11, 2007, 4:26:53 PM
On 11 Jan 2007 11:57:53 -0800, "ian" <siva...@gmail.com> wrote:
>No, I can show you UK tournaments where this has not been the case.

Why did it happen? Was the bar set improperly?

>> >EGF ratings have a
>> >certain degree of credibility, at least as much as SOS I imagine.
>>
>> This is not so important here. Credibility of ratings have to be
>> presumed to use any rating as a tiebreaker. So for the sake of
>> (simplifying) discussion, we can assume the ratings to be credible.
>> However, another aspect of the ratings is important here: their
>> significance. One might wish to require for credibility a very good
>> significance, however, currently this is far from realistic.
>
>You have lost me there.

You have spoken about rating credibility. The term is not defined; it
is more of a common-sense thing. OTOH, with some care it could be defined.
defined. Ratings have credibility if the rating numbers roughly do
what we expect and want them to do, if the players are ordered so that
we find their perceived strengths in relation to their rating numbers
more often than not, if ratings increase due to more wins, etc.

Significance is a number's meaningful accuracy. Classical example: You
are standing at the wall of a town T, look toward the neighbouring town
U, and estimate the distance as 10 km. Then you take out your ruler to
determine the wall's width: 27.6 cm. You conclude that the total
distance between towns T and U is 10,000,276 mm. Why is this nonsense?
Because your ruler's significance is +-1 mm while your visual estimate
has a significance of, say, +-3 km.

EGF ratings are given to 4 digits. But this does not mean that their
significance is +-1 (that is only their false, pretended
significance). We do not know the significance exactly. Some estimate
it as +-5, others as +-70.

>> >Each player enter's the tournament with a Rating, R(i)
>> >at the end their rating will change by dR(i).
>>
>> >The tournament performance rating, T(i) is the rating for which player
>> >i enters at to acheive dR(i)=0
>
>> Ok, you define it like this. How can one ever determine T(i) if the player
>> has dR(i)<>0 during the tournament?
>
>You've lost me there.

>When you apply the GoR rating algorithm you
>obtain a rating change. That is dR(i). This is what Ales Cieply
>calculates.

Ok.

>It is trivial to find what the tournament performance rating T(i) is.
>Simply increment R(i) by dR(i)

Up to here I get it.

>and repeat calculaitons until you find
>dR(i) is 0. At this point we have the rating at which you have
>performed.

Here I understand nothing.

>Yes many assumptions are made for GoR. Yes it is not perfect. SOS is
>not perfect either.

We know this, but it means we have to work hard to understand either's
quality.

>Ondrej Silt won the London Open on tiebreak, Ben He came third. Ben
>He's 1st round opponent had to drop out of the tournament, losing Ben
>He a lot of SOS points.

Before we study exceptional cases of tournaments where relevant
players dropped some rounds, we should first try to understand the
basics: tournaments where all relevant players play all rounds.

>T(i) is as significant as R(i).

I do not understand what T(i) is because you have not expressed it in
a clear way yet. However, I guess that whichever of the two variables
relies on more games has the greater significance (and we have assumed
that the rating system makes some sense in principle, so more games
provide more information rather than less).

--
robert jasiek

ian

Jan 11, 2007, 4:48:23 PM

On Jan 11, 9:26 pm, Robert Jasiek <jas...@snafu.de> wrote:
> On 11 Jan 2007 11:57:53 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >No, I can show you UK tournaments where this has not been the case.
>
> Why did it happen? Was the bar set improperly?


>
> >> >EGF ratings have a
> >> >certain degree of credibility, at least as much as SOS I imagine.
> >>
> >> This is not so important here. Credibility of ratings have to be
> >> presumed to use any rating as a tiebreaker. So for the sake of
> >> (simplifying) discussion, we can assume the ratings to be credible.
> >> However, another aspect of the ratings is important here: their
> >> significance. One might wish to require for credibility a very good
> >> significance, however, currently this is far from realistic.
>

> >You have lost me there.
>
> You have spoken about rating credibility. The term is not defined, it

> >Simply increment R(i) by dR(i)
>
> Up to here I get it.


>
> >and repeat calculaitons until you find
> >dR(i) is 0. At this point we have the rating at which you have

> >performed.
>
> Here I understand nothing.


>
> >Yes many assumptions are made for GoR. Yes it is not perfect. SOS is

> >not perfect either.
>
> We know this, but it means we have to work hard to understand either's


> quality.
>
> >Ondrej Silt won the London Open on tiebreak, Ben He came third. Ben
> >He's 1st round opponent had to drop out of the tournament, losing Ben

> >He a lot of SOS points.
>
> Before we study exceptional cases of tournaments where relevant


> players dropped some rounds, we should first try to understand the
> basics: tournaments where are all relevant players play all rounds.
>

> >T(i) is as significant as R(i).
>
> I do not understand what T(i) is because you have not expressed it in
> a clear way yet.

Yes I have.

Mef

Jan 11, 2007, 5:01:36 PM

I'm no tournament or rating theorist so feel free to take my comments
with a grain of salt.

Isn't the general idea of crowning a tournament winner to find out
who played the best in that tournament? As such, shouldn't the results
of the tournament be derivable from data obtained only through the
tournament itself? I can understand using previous data for things like
seeding going into a tournament, but as far as crowning a
champion/breaking ties, the goal is to decide who is the better player
on that day, no? If you begin using other tournament results to decide
the final placement of players then you are no longer running a
tournament, you are running a league (there is nothing wrong with
a league, just that you should call it such). Just my 2 cents.


Cheers,

Mef

ian

Jan 11, 2007, 5:19:52 PM

On Jan 11, 10:01 pm, "Mef" <mwille...@gmail.com> wrote:
> ian wrote:
> > Most tournaments in the Europe are EGF rated. Is it possible to start
> > using GoR as a tiebreaker. Often when number of wins are equal we try
> > to see who played the best with SOS, SOSOS, SODOS or others.
> > Is it possible to use 'tournament performance GoR' as the tiebreaker.
> > This being the rating at which they would have entered at to experience
> > no rating change.
>

> > For 100% wins, this is useless of course.
>
> I'm no tournament or rating theorist so feel free to take my comments


> with a grain of salt.
>
> Isn't generally the idea of crowning a tournament winner to find out
> who played the best in that tournament? As such shouldn't the results
> of the tournament be derivable from data obtained only through the
> tournament itself? I can understand using previous data for things like
> seeding going into a tournament, but as far as crowning a
> champion/breaking ties, the goal is to decide who is the better player
> on that day, no? If you begin using other tournament results to decide
> the final placement of players then you are no longer running a
> tournament, you are running a league (which there is nothing wrong with
> a league, just that you should call it such). Just my 2 cents.
>

There is much truth to this, but many events demand a winner. The
European Go Congress is one such event. This year it was won by 1 point
on the secondary tiebreaker. How good SOSOS is as a measure of
performance has not really been established.

The idea behind this tiebreaker is that European players have
reasonably reliable ratings. Of course one can argue that ratings are
not reliable. If they were, though, it seems to me that tournament
performance rating is an obvious measure of performance.

Mef

Jan 11, 2007, 6:04:03 PM

I completely understand the necessity to break the tie and crown a
winner (or runner-up, etc), I just feel that in general the final
results of a tournament should be based solely on what has happened in
that tournament. I completely agree that SOSOS is not a great
indicator of performance in a tournament; however, if you are going
down to a secondary tiebreaker you have already proven that their
performances were both pretty equal, and only something marginal
will break the tie (i.e. I imagine SOSOS has some significance when you
take it as a given that there are equal SOS*).

The other issue I would have against using EGF ratings as a tiebreak is
that the EGF rating is an average of how they play over time, and of
course no player wants the tournament day to be an average day for
playing (it would also have a slight bias toward players who have been
strong for a longer period of time, as opposed to improving/up and
coming players). I like the idea of a tournament because it's a chance
to exceed your rank/overcome your seed; using the rank as a tiebreak
makes having the lower rank doubly painful.

I guess I just feel that SOSOS or SOSODOS or SOSOSOSOSOSOS or whatever
else your sinking ship feels the need to broadcast has more validity
regarding how the games were played that day than the EGF ratings going
into it*. Though it does make me glad that I'm not a tournament
organizer who has to make tough decisions like this, and will have to
tell someone he missed out on the prize money because one of his
opponents played someone who happened to be in the same pool as someone
who may have lost all his games due to food poisoning or what not...

Cheers,
Mef

* As stated previously I am not an expert in these matters, I have no
mathematical basis for any of these claims. These statements have not
been evaluated by the Food and Drug Administration. This product is not
intended to diagnose, treat, cure or prevent any disease.

Robert Jasiek

Jan 12, 2007, 3:00:44 AM
On 11 Jan 2007 13:48:23 -0800, "ian" <siva...@gmail.com> wrote:
>Yes I have.

Your attempts to explain tournament performance rating have been:

>>>>The tournament performance rating, T(i) is the rating for which player
>>>>i enters at to acheive dR(i)=0

>>When you apply the GoR rating algorithm you


>>obtain a rating change. That is dR(i). This is what Ales Cieply
>>calculates.

>>and repeat calculaitons until you find
>>dR(i) is 0. At this point we have the rating at which you have
>>performed.

Your two descriptions differ fundamentally: First you say that
tournament performance rating is a value that a player has at the
moment of entering a tournament. Then you say that tournament
performance rating is something that is calculated using dR(i), i.e.
at the end of a tournament. As long as you contradict yourself in the
descriptions, I have no chance to understand what you really mean.

Then I do not understand what you mean by "repeat calculations". Which
calculations? How to repeat them? Then I do not understand "until you
find dR(i) is 0". How can dR(i) become 0 if a player's dR(i) has
changed during a tournament (which would be by far the most frequent
case)?

--
robert jasiek

Robert Jasiek

Jan 12, 2007, 3:09:49 AM
On 11 Jan 2007 14:19:52 -0800, "ian" <siva...@gmail.com> wrote:
>There is much truth to this, but many events demand a winner.

Not many do demand a winner. E.g., any championship can share the
title if it is tied. Titles can be shared.

Trophies cannot be shared but there is no need to use any. Prizes can
consist of divisible things like money.

What cannot be shared is flight tickets or seeding places.

>The European Go Congress is one such event.

There is no need to split a tied title. It is currently done but it is
NOT necessary.

>This year it was one by 1 point
>on the secondary tiebreaker.

Several times it has been such a close lottery.

>The idea behind this tiebreaker is that European players have
>reasonable reliable ratings. Of course one can argue that ratings are
>not reliable. If they were though, it seems to me that tournament
>performance rating is an obvious measure of performance.

What matters for ratings being used as a tiebreaker is the ratings'
"significance". The aspects "credibility" or "reliability" are not
sufficient for such a usage.

--
robert jasiek

Robert Jasiek

Jan 12, 2007, 3:16:11 AM
On 11 Jan 2007 15:04:03 -0800, "Mef" <mwil...@gmail.com> wrote:
>I completely understand the necessity to break the tie and crown a
>winner

There is no such necessity. Titles can be shared because it is
possible for the tournament rules to say that a title is shared among
those tied for the top place at the end of the tournament.

>a tournament organizer who has
>to make the tough decisions like this

A tournament organizer should never be in such a position. The result
interpretation rules must be fixed before the start of a tournament.

>, and will have to tell someone he
>missed out on the prize money because one of his opponents played
>someone who happened to be in the same pool as someone who may have
>lost all his games due to food poisoning or what not...

Sure, this is one of the organizers' pastimes :)

--
robert jasiek

JP

Jan 12, 2007, 4:52:57 AM
Dear Robert,

Ian's 'tournament performance rating' is a player's _hypothetical_ entry
GoR that would stay unchanged when GoRs are updated after the
tournament, based on the actual game results and the actual entry GoRs
of the opponents.

I am sure you see that, assuming a player does not win (or lose) every
game, this performance rating is uniquely determined by the game results
and the entry GoRs of the opponents. There are obviously many ways to
calculate it.

Cheers,

Juho P

Robert Jasiek

Jan 12, 2007, 5:28:24 AM
On Fri, 12 Jan 2007 11:52:57 +0200, JP <juho.p...@helsinki.fi>
wrote:

>Ian's 'tournament performance rating' is a player's _hypothetical_ entry
>GoR that would stay unchanged when GoRs are updated after the
>tournament, based on the actual game results and the actual entry GoRs
>of the opponents.

I see. Hopefully Ian has meant this. However, since this is new to me,
I wonder whether and why, in general, such a hypothetical entry GoR
always exists so that it stays unchanged.

>I am sure you see that, assuming a player does not win (or lose) every
>game, this performance rating is uniquely determined by the game results
>and the entry GoRs of the opponents.

I do not see it. If the proof has been done, where is it?

>There are obviously many ways to calculate it.

If they are so obvious, then what is an example way of calculation in
general? It is not obvious for me and I do not want to do the proof
myself just to understand what Ian and you are claiming. (Once one way
is known, then obviously there are many ways - but I do not even know
one way.)

--
robert jasiek

ian

Jan 12, 2007, 1:26:08 PM

On Jan 12, 10:28 am, Robert Jasiek <jas...@snafu.de> wrote:
> On Fri, 12 Jan 2007 11:52:57 +0200, JP <juho.penna...@helsinki.fi>


> wrote:
>
> >Ian's 'tournament performance rating' is a player's _hypothetical_ entry
> >GoR that would stay unchanged when GoRs are updated after the
> >tournament, based on the actual game results and the actual entry GoRs

> >of the opponents.
>
> I see. Hopefully Ian has meant this. However, since this is new to me,


> I wonder whether and why in general always a player's hypothetical
> entry GoR exists so that it does stay unchanged.
>
> >I am sure you see that, assuming a player does not win (or lose) every
> >game, this performance rating is uniquely determined by the game results

> >and the entry GoRs of the opponents.
>
> I do not see it. If the proof has been done, where is it?
>
> >There are obviously many ways to calculate it.
>
> If they are so obvious, then what is an example way of calculation in


> general? It is not obvious for me and I do not want to do the proof
> myself just to understand what Ian and you are claiming. (Once one way
> is known, then obviously there are many ways - but I do not even know
> one way.)
>

Ok I will assume you are being serious then.

Let's say I enter a tournament at 1400
and I have the following record: 1500-, 1450+, 1425+.

This yields a rating change of around 44 points. (Ales's algorithm may
have a few finesses I am unaware of, but this is what I get.)

So R(i) = 1400 and dR(i) = 44.
My program then pretends the entry grade was
R(i+1) = R(i) + dR.
It uses this value to recalculate dR. The opponents' grades do not
change, nor do the results of the games.
a new dR is found :: dR(i), it is less than dR(i)

The process continues until dR(i+n) is less than 0.5;
at this point R(i+n) becomes the T(i), which my program claims to
be 1550.

I cannot express it any more clearly.
T(i) is clearly independent of the entry grade in this instance.
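
A minimal sketch of this iteration in Python. The expected_score and con
values below are generic Elo-style placeholders chosen for illustration,
not Ales Cieply's actual EGF formulas, so the output will not match the
1550 above; only the fixed-point loop and the 0.5 stopping threshold are
the point.

# Sketch of the 'tournament performance rating' T(i) described above:
# iterate the entry rating until the tournament's rating change is ~0.
# expected_score is a generic Elo-style stand-in, NOT the exact EGF formula.

def expected_score(r, opp, scale=100.0):
    """Placeholder winning expectancy of a player rated r against opp."""
    return 1.0 / (1.0 + 10.0 ** ((opp - r) / scale))

def rating_change(r, results, con=30.0):
    """Placeholder dR for one tournament: con * sum(actual - expected score)."""
    return con * sum(score - expected_score(r, opp) for opp, score in results)

def performance_rating(entry_rating, results, tol=0.5, max_iter=1000):
    """Fixed-point iteration: r <- r + dR(r) until |dR| < tol."""
    r = float(entry_rating)
    for _ in range(max_iter):
        dr = rating_change(r, results)
        if abs(dr) < tol:
            break
        r += dr
    return r  # with a 100% (or 0%) score r simply drifts far away from the field

# The example above: entry 1400, a loss to 1500 and wins against 1450 and 1425.
results = [(1500, 0), (1450, 1), (1425, 1)]
print(round(performance_rating(1400, results)))

Swapping in the real EGF winning expectancy and con factor is what would
reproduce the figures quoted here; the loop itself stays the same.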

Most bars are set on the basis of ranks. However, ranks do not always
correspond to ratings. I noted one tournament in the UK. The bar was at
3d. A 3d won the tournament with 2/3. A 2d beat this 3d to achieve 3/3.
It seems questionable to me that the 3d deserved to win because of his
SOS. If more significance had been given to ratings then this might not
have occurred. Then again, it might have occurred. I am not suggesting
tournament performance is a perfect tiebreaker, however it is an easily
understandable one.

ian

Jan 12, 2007, 1:28:03 PM

On Jan 12, 8:09 am, Robert Jasiek <jas...@snafu.de> wrote:
> On 11 Jan 2007 14:19:52 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >There is much truth to this, but many events demand a winner.
>
> Not many do demand a winner. E.g., any championship can share the


> title if it is tied. Titles can be shared.
>
> Trophies cannot be shared but there is no need to use any. Prizes can
> consist of divisble things like money.
>
> What cannot be shared is flight tickets or seeding places.
>

> >The European Go Congress is one such event.
>
> There is no need to split a tied title. It is currently done but it is


> NOT necessary.
>
> >This year it was one by 1 point

> >on the secondary tiebreaker.
>
> Several times it has been such a close lottery.


>
> >The idea behind this tiebreaker is that European players have
> >reasonable reliable ratings. Of course one can argue that ratings are
> >not reliable. If they were though, it seems to me that tournament

> >performance rating is an obvious measure of performance.
>
> What matters for ratings being used as a tiebreaker is the ratings'


> "significance". The aspects "credibility" or "reliability" are not
> sufficient for such a usage.
>

Has anyone ever shown SOS or SOSOS to be significant? If not, your
statement is suspect.

Robert Jasiek

Jan 12, 2007, 1:51:51 PM
On 12 Jan 2007 10:28:03 -0800, "ian" <siva...@gmail.com> wrote:
>Has anyone ever shown SOS or SOSOS to be significant?

I made some attempts here to show that SOS has a very low
significance:

http://groups.google.de/group/rec.games.go/msg/ac4ab38d218fafdf?dmode=source&hl=de


--
robert jasiek

Robert Jasiek

Jan 12, 2007, 1:58:58 PM
On 12 Jan 2007 10:26:08 -0800, "ian" <siva...@gmail.com> wrote:
>Let's say I enter a tournament at 1400
>I have the following record 1500 - , 1450+ 1425+
>
>this yields a rating change of around 44 points. (Ales algorithm may
>have a few finesse's I am unaware of, but this is what I get)
>
>So R(i) = 1400 and dR(i) = 44
>my program then pretends the entry grade was
>R(i+1) =R(i)+dR
>IT uses this value to recalculate dR. The opponents grades do not
>change, nor do the results of the games.
>a new dR is found :: dR(i),

dR(i+1), I guess you mean

>it is less than dR(i)
>
>The process continues until dR(i+n) is less than 0.5
>at this point the R(i+n) becomes the T(i), which my program claims to
>be 1550
>
>i cannot express it any more clearly.

Ok, this description is reasonably good. - You do not need to prove
existence because you use a numerical approach, I see.
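
For the record, the condition that this iteration solves can be written
down in a generic Elo-style notation (W for the winning expectancy and
con for the update factor are stand-ins here, not the exact EGF
formulas):

\Delta R_i(r) = \mathrm{con}(r) \sum_{j=1}^{n} \bigl( S_j - W(r, R_j) \bigr),
\qquad \Delta R_i\bigl(T(i)\bigr) = 0,

where the R_j are the opponents' entry ratings and the S_j the results
(1 = win, 0 = loss). Each W(r, R_j) increases strictly with r, so the
sum decreases strictly in r; with at least one win and at least one
loss it is positive for very small r and negative for very large r, so
the sum has exactly one zero, and since con(r) > 0 that zero is the
unique T(i).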

***

Why do you want to use T(i) as a tiebreaker instead of R(i) or of
R(i)+dR(i) or of dR(i)?

>I noted one tournmant in the UK. The bar was at
>3d. A 3d won the tournament with 2/3. A 2d beat this 3d to acheive 3/3.
>It seems questionable to me that the 3d deserved to win because of his
>SOS. If more significance had been given to ratings then this might not
>have occurred.

How many players were above the bar, how many 2d's participated? Maybe
the problem was related to setting the bar doubtfully rather than to
which tiebreaker was used?

--
robert jasiek

ian

Jan 12, 2007, 3:05:27 PM

On Jan 12, 6:58 pm, Robert Jasiek <jas...@snafu.de> wrote:


> On 12 Jan 2007 10:26:08 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >Let's say I enter a tournament at 1400
> >I have the following record 1500 - , 1450+ 1425+
>
> >this yields a rating change of around 44 points. (Ales algorithm may
> >have a few finesse's I am unaware of, but this is what I get)
>
> >So R(i) = 1400 and dR(i) = 44
> >my program then pretends the entry grade was
> >R(i+1) =R(i)+dR
> >IT uses this value to recalculate dR. The opponents grades do not
> >change, nor do the results of the games.

> >a new dR is found :: dR(i),
>
> dR(i+1), I guess you mean


>
> >it is less than dR(i)
>
> >The process continues until dR(i+n) is less than 0.5
> >at this point the R(i+n) becomes the T(i), which my program claims to
> >be 1550
>

> >i cannot express it any more clearly.
>
> Ok, this description is reasonably good. - You do not need to prove


> existence because you use a numerical approach, I see.
>
> ***
>
> Why do you want to use T(i) as a tiebreaker instead of R(i) or of
> R(i)+dR(i) or of dR(i)?
>

R(i), the rating a player entered at? That would be pretty unfair -
strongest rating wins all tiebreaks? Are you serious?

dR(i) means most undergraded wins.

Actually, on that note, it is possible we should use R(i) + 0.5 dR(i)
as the opponents' ratings, to compensate for undergraded punks.

> >I noted one tournmant in the UK. The bar was at
> >3d. A 3d won the tournament with 2/3. A 2d beat this 3d to acheive 3/3.
> >It seems questionable to me that the 3d deserved to win because of his
> >SOS. If more significance had been given to ratings then this might not

> >have occurred.
>
> How many players were above the bar, how many 2d's participated? Maybe


> the problem was related to setting the bar doubtfully rather than to
> which tiebreaker was used?

The bar was set doubtfully owing to using ranks.

ian

Jan 12, 2007, 3:19:35 PM

On Jan 12, 6:51 pm, Robert Jasiek <jas...@snafu.de> wrote:
> On 12 Jan 2007 10:28:03 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >Has anyone ever shown SOS or SOSOS to be significant?
>
> I made some attempts here to show that SOS has a very low
> significance:
>
> http://groups.google.de/group/rec.games.go/msg/ac4ab38d218fafdf?dmode...
>
I don't understand this argument at all; it lacks any proofs.

Robert Jasiek

Jan 12, 2007, 3:29:50 PM
On 12 Jan 2007 12:05:27 -0800, "ian" <siva...@gmail.com> wrote:
>R(i) , the rating a player entered at? That would be pretty unfair,
>strongest rating wins all tiebreaks?

Some have proposed this, maybe because it reflects earlier achievements.
The tournament's achievements are already considered in the first
result criterion, so why not consider what is not considered yet? (You
know what I think about ratings in principle, but you make it a bit
easy for yourself to call the proposal unfair without good reason.)

>dR(i) means most undergraded wins.

As might the first result criterion:)

--
robert jasiek

ian

Jan 12, 2007, 3:54:38 PM

On Jan 12, 8:29 pm, Robert Jasiek <jas...@snafu.de> wrote:


> On 12 Jan 2007 12:05:27 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >R(i) , the rating a player entered at? That would be pretty unfair,

> >strongest rating wins all tiebreaks?
>
> Some proposed this, maybe because it reflects earlier achievements.


> The tournament's achievements are already considered in the first
> result criterion, so why not consider what is not considered yet? (You
> know what I think about ratings in principle, but you make it yourself
> a bit easy to call the proposal unfair without good reason.)
>

I struggle to believe somebody seriously proposed that. A tournament's
result, as far as possible, should reflect only performance during the
event. As most tournaments are McMahon, this is generally not 100% the
case.

> >dR(i) means most undergraded wins.
>
> As might the first result criterion:)
>

Again for the above reason, this would be unfair.

henricb...@gmail.com

Jan 12, 2007, 8:26:51 PM

Mef wrote:

> Isn't generally the idea of crowning a tournament winner to find out
> who played the best in that tournament? As such shouldn't the results
> of the tournament be derivable from data obtained only through the
> tournament itself?

Exactly. Ian's suggestion amounts to awarding the championship or
whatever to the player whose opponents had performed better in earlier
tournaments. It's of course not quite the same as staying in bed and
not bothering to play at all, just awarding the championship to the
player who was highest rated to begin with, but it goes some way in
that direction. SOS at least is based only on the results from the
current tournament. I think SOS is also much more transparent: the
players can calculate their SOS there on the spot rather than waiting
for some elaborate numerical tricks. That's a great advantage.

Obviously there is often an element of chance in things like SOS and
SOSOS, but I don't think there is any need to worry too much about
that, as long as the rules are fair and clear to the participants at
the outset. In what part of life does chance not play any role at all?

The MM system favors players who have performed better in previous
tournaments of course, but that's a compromise in order to be able to
crown a winner in a few rounds and avoid overly uneven games. The top groups
should be sufficiently large to include everybody with a reasonable
chance to win the tournament.

best regards,
henric

ian

Jan 12, 2007, 8:35:39 PM

On Jan 13, 1:26 am, henricbergsa...@gmail.com wrote:
> Mef skrev:
>
> > Isn't generally the idea of crowning a tournament winner to find out
> > who played the best in that tournament? As such shouldn't the results
> > of the tournament be derivable from data obtained only through the

> > tournament itself?
>
> Exactly. Ians suggestion amounts to awarding the championship or


> whatever to the player whose opponents had performed better in earlier
> tournaments. It's of course not quite the same as staying in bed and
> not bothering to play at all, just awarding the championship to the
> player who was highest rated to begin with, but it goes some way in
> that direction. SOS at least is based only on the results from the
> current tournament. I think SOS is also much more transparent, the
> players can calculate their SOS there on the spot, rather than waiting
> for some elaborate numerical tricks, that's a great advantage.

Perhaps, but it is possible to refine the system to include current
performances. SOS is not demonstrably fair, and SOSOS is demonstrably
more random.

Robert Jasiek

Jan 13, 2007, 2:26:31 AM
On 12 Jan 2007 12:54:38 -0800, "ian" <siva...@gmail.com> wrote:
>A tournaments
>result, as far as possible, should reflect only performance during the
>event.

Since you think this (BTW, I agree), you should never use any
ratings-dependent tiebreaker.

--
robert jasiek

Robert Jasiek

Jan 13, 2007, 2:38:28 AM
On 12 Jan 2007 17:26:51 -0800, henricb...@gmail.com wrote:
>SOS at least is based only on the results from the
>current tournament. I think SOS is also much more transparent, the
>players can calculate their SOS there on the spot, rather than waiting
>for some elaborate numerical tricks, that's a great advantage.

These aspects are correct but SOS is not as nice as you describe it:
http://senseis.xmp.net/?SOS%2FDiscussion

To cite some of it:
>>
For a single player, greater SOS for him than smaller SOS for him
could be interpreted as greater strength of his opponents during the
tournament. For any two players, a meaningful comparison is hardly
possible because it is unclear

* whether winning or whether losing games against opponents with
greater numbers of wins during the tournament is better,
* why SOS should be meaningful at all when almost always players
with similar SOS have such a small SOS-difference that it is smaller
than every meaningful numerical significance would be,
* why SOS should distinguish players at the end of a tournament
after the pairing program has done its very best to make SOS of every
two players with the same number of wins as close as possible,
* why winning in earlier rounds should be more important than
winning in later rounds (with early wins a player collects more SOS
since he gets to play players with more wins earlier),
* why it should be fair that one's opponents may get as many or
few wins in rounds after one has played them (so that one cannot
influence their later achievements during the tournament).

Some argue that SOS would be fair on average over many tournaments but
this is refuted by the law of large numbers. It requires an infinite
number of tournaments to allow that conclusion while no player ever
can play an infinite number of tournaments. Even worse, specific
titles are issued only once per year, tournament conditions and a
player's development change.
<<

--
robert jasiek

Barry

Jan 13, 2007, 3:31:44 AM
On Sat, 13 Jan 2007 08:38:28 +0100, Robert Jasiek wrote:

> On 12 Jan 2007 17:26:51 -0800, henricb...@gmail.com wrote:
>>SOS at least is based only on the results from the
>>current tournament. I think SOS is also much more transparent, the
>>players can calculate their SOS there on the spot, rather than waiting
>>for some elaborate numerical tricks, that's a great advantage.
>
> These aspects are correct but SOS is not as nice as you describe it:
> http://senseis.xmp.net/?SOS%2FDiscussion
>
> To cite some of it:
>>>
> For a single player, greater SOS for him than smaller SOS for him
> could be interpreted as greater strength of his opponents during the
> tournament. For any two players, a meaningful comparison is hardly
> possible because it is unclear

Why unclear? A higher SOS generally means harder opponents.

>
> * whether winning or whether losing games against opponents with
> greater numbers of wins during the tournament is better,

An argument against SODOS perhaps.

> * why SOS should be meaningful at all when almost always players
> with similar SOS have such a small SOS-difference that it is smaller
> than every meaningful numerical significance would be,

This is an argument? Of course if they have similar SOS the difference is
small.

> * why SOS should distinguish players at the end of a tournament
> after the pairing program has done its very best to make SOS of every
> two players with the same number of wins as close as possible,

This assumes that SOS is used during the tournament in deciding the draw.
In many cases this does not happen, and even where it does happen SOS is
still a measure of how hard your opponents were. You could say that
distribution of SOS gives a measure of how well the pairings have been
arranged. Unfortunately pairings are never perfect.

> * why winning in earlier rounds should be more important than
> winning in later rounds (with early wins a player collects more SOS
> since he gets to play players with more wins earlier),

An argument against CUSS perhaps. In the case of SOS, early wins mean
that you are paired with harder opponents, which is what SOS is supposed
to estimate.

> * why it should be fair that one's opponents may get as many or
> few wins in rounds after one has played them (so that one cannot
> influence their later achievements during the tournament).

But their performance is based on how good they are, which is what SOS is
attempting to measure.

>
> Some argue that SOS would be fair on average over many tournaments but
> this is refuted by the law of large numbers. It requires an infinite
> number of tournaments to allow that conclusion while no player ever
> can play an infinite number of tournaments. Even worse, specific
> titles are issued only once per year, tournament conditions and a
> player's development change.
>

Like Henric, I think that SOS is a reasonable tiebreaker to use: it is
easy to calculate and understand. It depends only on the results of the
tournament and it is never worse than tossing a coin.

There are other ways to break ties that have better credentials (e.g. using
the EGF algorithm with everyone starting off with the same rating) but SOS
will continue to be used for the above reasons.

Yes: Playoffs or sharing prizes/titles may be better where practical.

Robert Jasiek

Jan 13, 2007, 5:53:35 AM
On Sat, 13 Jan 2007 21:31:44 +1300, Barry <b.ph...@clear.net.nz>
wrote:
>Why unclear?

Because of the given reasons.

>A higher SOS generally means harder opponents.

We have to distinguish SOS used as a pairing criterion from SOS used as
a result criterion. When SOS is used as a pairing criterion, then for
each round being paired one can only estimate the expected average of
how hard one's opponents are, where "hard" refers to tournament
strength, i.e. the opponents' number of wins in that tournament over
all rounds.

When SOS is not used as a pairing criterion but as a result criterion,
then after the last round one can see in retrospect how hard a player's
opponents have been. However, this does not mean that considering this
aspect of SOS is fair, because the player could not influence his
opponents' hardness in the rounds after he played against them. Using
this retrospect is like throwing a coin for each result of the games
that his opponents played in the rounds after he played against them. -
Concerning "the rounds before", this part of a player's SOS is earned.
But it also means that a player has to win the early rounds to get the
greater chance of then playing harder opponents on average. Since all
players (should) know that winning the early rounds is that important,
all can attempt wins then equally. At the same time, the luck of how
hard or weak an opponent will turn out to be is greatest during the
earliest rounds. Thus, when the opponents' hardness is still the least
assessed, during the early rounds, SOS gives the player the greater
part of his self-earned tiebreak points. During the early rounds, SOS
behaves like a noise collector - during the late rounds, SOS behaves
like a coin-tossing collector.

When SOS is used in both functions, the two effects are furthermore
intertwined with each other.

To get harder opponents, it is not necessary to consider SOS as a
result criterion in retrospect, but it suffices to use the
Swiss/MacMahon tournament system: The more a player continues winning,
the harder his opponents become because players with the same number of
wins/MM points are paired against each other.

>> * whether winning or whether losing games against opponents with
>> greater numbers of wins during the tournament is better,
>
>An argument against SODOS perhaps.

It is well-known that it is an argument against SODOS. So well-known,
indeed, that everybody tends to overlook that it is also an argument
against SOS, although to a smaller degree.

>> * why SOS should be meaningful at all when almost always players
>> with similar SOS have such a small SOS-difference that it is smaller
>> than every meaningful numerical significance would be,
>
>this is an argument?

An important argument used consistently in statistics and physics.
When the noise overwhelms the data, then do not misinterpret any
single datum as precise. Currently, SOS is interpreted with a precision
of 1 (or 1/2) SOS point while extremely likely the noise is much
greater. Tournament results are interpreted as if one measured the
distance between two atoms with a ruler.

>> * why SOS should distinguish players at the end of a tournament
>> after the pairing program has done its very best to make SOS of every
>> two players with the same number of wins as close as possible,
>
>This assumes that SOS is used during the tournament in deciding the draw.

Yes.



>In many cases this does not happen,

In Europe, it is done frequently. Setting a pairing program's
parameters once at the start of a tournament and avoiding any thought
about their meanings is just too convenient.

>and even where it does happen SOS is
>still a measure of how hard your opponents were.

See above.

>You could say that
>distribution of SOS gives a measure of how well the pairings have been
>arranged.

For this purpose, SOS is pretty reasonable.

>> * why winning in earlier rounds should be more important than
>> winning in later rounds (with early wins a player collects more SOS
>> since he gets to play players with more wins earlier),
>
>An argument against CUSS perhaps.

It is an extremely relevant argument about CUSS or ROS because that is
the major aspect of their definition. For SOS, it is still a very
relevant argument because, for each of your opponents, the rounds
before you play them give you SOS points independently of who your
opponent is, because this is how Swiss/MacMahon tournaments work: they
pair players with equal numbers of wins. Therefore this part of SOS
(how many SOS points you get for the rounds before you play a
particular opponent) behaves like the same part of ROS (or CUSS).

The argument applies to this part of a player's SOS.

>In the case of SOS; early wins mean
>that you are paired with harder opponents

See above.

>which is what SOS is supposed to estimate.

As a pairing criterion or as a result criterion?

>> * why it should be fair that one's opponents may get as many or
>> few wins in rounds after one has played them (so that one cannot
>> influence their later achievements during the tournament).
>
>But their performance is based on how good they are, which is what SOS is
>attempting to measure.

I agree that SOS, used as a result criterion, measures the hardness of
a player's opponents. However, I do not think that this is good
information. It is bad information because the player himself cannot
influence in the least his opponents' performance in the rounds after
he has played against them. A player should have better tournament
success if he does more for it. For getting a better SOS-ordered place
within his final wins/MM points group in the result table, he can only
pray for better luck than his lottery competitors in the same group.

Go is supposed to be a game of skill - not a game of luck. Therefore as
many elements of luck as possible ought to be avoided in the result
table. That a pairings draw must be done at all and that this involves
luck (because one might get an easy or a tough first-round opponent, or
because one opponent's style is less frightening than another's) does
not justify introducing further, unnecessary elements of luck.

>Like Henric I think that SOS is a reasonable tie breaker to use: It is
>easy to calculate and understand.

Not using any tiebreaker is much easier to calculate and understand.

Then SOS has yet another disadvantage: opponents missing rounds play a
role.

>It depends only on the results of the tournament

As does not using any tiebreaker.

>and it is never worse than tossing a coin.

The essential question is: How does it compare to not using any
tiebreaker?

>SOS will continue to be used because of the above reasons.

As long as too few compare it seriously to not using any tiebreaker.

--
robert jasiek

ian

Jan 13, 2007, 6:28:37 AM

On Jan 13, 7:26 am, Robert Jasiek <jas...@snafu.de> wrote:


> On 12 Jan 2007 12:54:38 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >A tournaments
> >result, as far as possible, should reflect only performance during the

> >event.
>
> Since you think this (BTW, I agree), you should never use any ratings-
> dependent tiebreaker.
>

Then we can also not use Rank, since this affects SOS. Therefore we can
only use extra games or Nigiri.

Robert Jasiek

Jan 13, 2007, 7:24:40 AM
On 13 Jan 2007 03:28:37 -0800, "ian" <siva...@gmail.com> wrote:
>Then we can also not use Rank

Rank as a tiebreaker? Rank is used only for forming the MacMahon
groups. The reasoning is that ranks replace a preceding seeding Swiss
tournament.

>Therefore we can only use extra games or Nigiri.

Nigiri as a tiebreaker is at least honest.

--
robert jasiek

ian

Jan 13, 2007, 7:42:13 AM

On Jan 13, 12:24 pm, Robert Jasiek <jas...@snafu.de> wrote:
> On 13 Jan 2007 03:28:37 -0800, "ian" <sivad...@gmail.com> wrote:
>
> >Then we can also not use Rank
>
> Rank as a tiebreaker? Rank is used only for forming the MacMahon


> groups. The reasoning is that ranks replace a preceding seeding Swiss
> tournaments.

Rank affects SOS. Therefore, as rank was gained prior to the tournament,
we cannot use SOS, according to your argument.

>
> >Therefore we can only use extra games or Nigiri.
>
> Nigiri as a tiebreaker is at least honest.
>

Nigiri is a different skill, and hasn't been covered yet by the Rules
properly.

gaga

unread,
Jan 15, 2007, 4:19:48 AM1/15/07
to
In amateur tournaments such as those in Europe it is simply impossible to
determine a 'perfect' score in case of ties. Either you accept the ties or
you have to accept additional games, as I do not see other ways to
determine the end result. All other algorithms are based more or less on
magic, not on performance in the event. One may try nigiri (or any other
luck-based rule) of course, but how reasonable and just is that?
If one wants results completely independent of ranks and ratings,
then systems other than MacMahon have to be used, at least for the top
players. You would have two tournaments instead of one - I guess an
acceptable solution at least for some. How acceptable (doable) it is for
organizers is another matter.


br

henricb...@gmail.com

unread,
Jan 15, 2007, 8:03:51 AM1/15/07
to

gaga skrev:

> >>> Therefore we can only use extra games or Nigiri.
> >>
> >> Nigiri as a tiebreaker is at least honest.
> >
> > Nigiri is a different skill, and hasn't been covered yet by the Rules
> > properly.
> >
> In amateur tournaments such as those in Europe it is simply impossible to
> determine a 'perfect' score in case of ties. Either you accept the ties or
> you have to accept additional games, as I do not see other ways to
> determine the end result. All other algorithms are based more or less on
> magic, not on performance in the event. One may try nigiri (or any other
> luck-based rule) of course, but how reasonable and just is that?
> If one wants results completely independent of ranks and ratings,
> then systems other than MacMahon have to be used, at least for the top
> players. You would have two tournaments instead of one - I guess an
> acceptable solution at least for some. How acceptable (doable) it is for
> organizers is another matter.
>
>
> br

Agreed (I think). To elaborate a bit more:

We want tournaments in five or at most six rounds, to be able to hold
them on weekends, in 2-3 days. We want the players to enjoy themselves:
we don't want games that are too unbalanced. Those who are strong enough
that they can be expected to compete for the top places should be
allowed to do so on fair terms.

Clearly it is impossible to design a failsafe tournament system which
can always provide a completely rational ground for crowning as winner
the player who has made the "best" achievement.

The notion of eliminating chance completely from the process, in my
opinion is futile, not realistic and pretty naive. Clearly, whatever we
do there will always be an element of chance involved. One can think of
incidents like T Mark Hall losing the game because of a mobile phone
ringing at an inopportune moment. Or you can happen to be seated on
the side of the table which you don't like, or which has the worse light
conditions, in an important game. A more fundamental problem: I think
it is permissible to make the assumption that some players win more
easily against particular opponents - i.e. that situations exist where
A wins more often against B, who wins more often against C. That's
a priori very likely, since go strength consists of many different
skills (like reading power, good judgement in opening and direction of
play, positional judgement, endgame skill, time management etc). Also
we suffer from all sorts of psychological hangups. Of course there must
then be an element of chance involved in a Swiss tournament or a
knockout tournament.

Chance is not unfair, it is fair, it doesn't discriminate between
people. A player told me last year that he suspected that Gerlach had
inserted some lines of code with his name, to provide for his being
often paired down in MM tournaments. If this were true, it would be
unfair. But we don't really believe such a thing can be true.

SOS, and SOSOS etc are easy to calculate. There is probably some
rationale for them in most situations, for SOS more than SOSOS. They
for sure involve some chance as well, in many cases. But they can not
be "worse" than throwing dice, since we are prepared to believe that
they may have more to do with the players' achievements than throwing
dice. So why not use them, if tiebreakers are required. It is a great
advantage for the organiser to be sure that a winner can really be
determined - it provides for a greater choice of prizes and awards.

Those who have any chance at all to win the tournament should as far as
possible be included in the top group, so that it is the achievement
in the tournament, and not previous achievements, which counts. Depending
on the strengths of the participants it is sometimes necessary to
compromise on that point in MM tournaments; that's too bad, but one can
always try to do things as well as possible.

To make sure that people are reasonably ranked/rated is a separate
issue, which is not trivial. The point where I'm most suspicious myself
regards possible regional discrepancies in ranks, both in dan ranks and
in EGF GoR. If people from some countries/regions are relatively
underrated / underranked, they are not assigned fairly to MM groups in MM
tournaments. I don't in fact believe that regional discrepancies are
very large, but it would be reassuring to see them mapped and measured
properly. As far as GoR is concerned, I would expect that the easiest
way to map regional discrepancies is to do statistics on GoR increments
in games where people from different places play each other. The
European Go Database ( http://lnx.agi.go.it/EGD/EGD_index.php ) has
good tools to look at winning percentages, for instance in encounters
between different countries or different clubs, but this is not a good
measure of regional discrepancies, whereas statistics on GoR-increments
probably would be.
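
For what it's worth, a rough sketch of the kind of statistic I have in
mind, assuming one has game records with each player's country and GoR
increment available (the field names below are invented, not the actual
EGD export format):

from collections import defaultdict

def regional_gor_bias(games):
    # games: iterable of dicts with the (hypothetical) keys
    # 'country_a', 'country_b', 'gor_change_a', 'gor_change_b'.
    # Returns each country's average GoR increment in cross-country
    # games; a persistently positive average would suggest that the
    # country's players are underrated relative to the rest.
    total = defaultdict(float)
    count = defaultdict(int)
    for g in games:
        if g["country_a"] == g["country_b"]:
            continue  # only games between different countries
        for side in ("a", "b"):
            c = g["country_" + side]
            total[c] += g["gor_change_" + side]
            count[c] += 1
    return {c: total[c] / count[c] for c in total}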

best regards,
Henric

gaga

unread,
Jan 15, 2007, 8:48:40 AM1/15/07
to
If only chance is taken into consideration then we can roll a die and
get the result this way. How to ensure that only performance (however
variable) can be appreciated in some way is a problem that any chance
rule does not solve. Adding complications and some binds to reality and
performance (or at least to the way the player lived through the
tournament) does not change much - I would not bother with use of
complicated tools if the only thing they do is obscure the fact that the
results (due to tie-breakers) are not related to the actual event.

> SOS, and SOSOS etc are easy to calculate. There is probably some
> rationale for them in most situations, for SOS more than SOSOS. They
> for sure involve some chance as well, in many cases. But they can not
> be "worse" than throwing dice, since we are prepared to believe that
> they may have more to do with the players' achievements than throwing
> dice. So why not use them, if tiebreakers are required. It is a great
> advantage for the organiser to be sure that a winner can really be
> determined - it provides for a greater choice of prizes and awards.

Well that all boils down to the purpose of a tournament. Correct me if I
am wrong here, but I think MacMahon was developed to allow many
winners, i.e. guys with a good proportion of wins who get a handshake
from the main organizer and a small something. All the changes to the
basic rule were meant to adjust for stronger players who are interested
in being the winner of the tournament. So we really have two groups of
players and two goals. This means that you really need two different
sets of rules. As long as very strong players from Asia come here mostly
to promote the great game, they accept that tournaments in Europe are
different. This will have to change if the needs of stronger players are
to be addressed and the further development of baduk in Europe is to be
supported, leading maybe to the establishment of a professional league
(probably possible only at the European level, given the number of
players) some time in the future.
I wonder how this problem is resolved in, say, Korea - they surely have
tournaments where people play for fun only - do they use the MacMahon
system (I doubt it) or something else? Or is a system which also gives
weaker players something to be praised for not known there?


//

henricb...@gmail.com

unread,
Jan 15, 2007, 10:41:10 AM1/15/07
to

gaga skrev:

> henricb...@gmail.com wrote:
> > conditions, in an important game. A more fundamental problem: I think
> > it is permissible to make the assumption that some players win more
> > easily against particular opponents - i.e. that situations exist where
> > A wins more often against B, who wins more often against C.

Correction: I meant of course to add that C tends to win against A.

> If only chance is taken into consideration then we can roll a die and
> get the result this way.

Obviously. Nobody is going to suggest a tournament where only chance is
considered though.

> How to ensure that only performance (however
> variable) can be appreciated in some way is a problem that any chance
> rule does not solve.

True. My suggestion was that it's not possible to ensure that only
performance comes in, given the constraints, such as limited number of
rounds.

A tournament where everybody meets everybody, in combination with a
playoff in the case of a tie, does eliminate chance, except for any
chance there may be in whatever is going on at the boards. But in
normal circumstances it is very impractical not to be able to predict
how many games will be required to determine the winner. If one
moreover wants to distribute the 2nd, 3rd etc. places and not only the
winner, the problem becomes extremely complicated.

> Adding complications and some binds to reality and
> performance (or at least to the way the player lived through the
> tournament) does not change much - I would not bother with use of
> complicated tools if the only thing they do is obscure the fact that the
> results (due to tie-breakers) are not related to the actual event.

Indeed, why bother.

> Well that all boils down to the purpose of a tournament.

Of course.

>Correct me if I
> am wrong here, but I think MacMahon was developed to allow many
> winners, i.e. guys with a good proportion of wins who get a handshake
> from the main organizer and a small something.

That's correct, plus that the MM system ensures that the participants
get to play opponents who are approximately at the same level, and that
the player who performs best will probably win the tournament.

> being the winner of the tournament. So we really have two groups of
> players and two goals. This means that you really need two different
> sets of rules.

Well, if we are discussing the rules for any single tournament, it's
rather a matter of finding a compromise which can satisfy all the
different goals reasonably well. I think the MM system, including SOS
and the like, is a rather optimised compromise. My first point is that
it's probably futile to spend much effort on finding improvements on
SOS etc, as one can not expect to improve things a lot anyway,
especially not in terms of "fairness", since "fairness" doesn't suffer
much from an element of chance. My second point is that I disagree with
Robert in that I don't think there is a lot to be gained from splitting
the win, rather than applying some admittedly arbitrary tiebreaker. One
doesn't gain anything in terms of fairness by splitting the win, as
opposed to awarding it according to preestablished rules which involve
an element of chance. It's often preferable from a practical point of
view to be able to award say 1st, 2nd and 3rd prize without hesitation.
Prize money can of course be split easily enough, but not titles,
honorary trophies, qualification for a higher level tournament and so
on. I've been to tournaments where even last resort tiebreakers like
"the oldest playeer" were used. A tiebreaker like that can hardly be
called unfair - after all we all get older at the same rate.

> As long as very strong players from Asia come here mostly
> to promote the great game, they accept that tournaments in Europe are
> different. This will have to change if the needs of stronger players are
> to be addressed and the further development of baduk in Europe is to be
> supported, leading maybe to the establishment of a professional league
> (probably possible only at the European level, given the number of
> players) some time in the future.

True, there may be a need for new formats in such a context in the
future. Qualification in many steps, round robin tournaments, knock out
tournaments etc. will be more used then.
The European Oza and the Dutch proposals for reforming the top group in
the European Championship are perhaps good examples to study. But I
suppose the original poster was more concerned with the bulk of open
European tournaments.

cheers,
H.

Robert Jasiek

unread,
Jan 15, 2007, 11:04:25 AM1/15/07
to
On 15 Jan 2007 05:03:51 -0800, henricb...@gmail.com wrote:
>The notion of eliminating chance completely from the process, in my
>opinion is futile

Chance cannot be eliminated but its impact can be minimized.

>Chance is not unfair, it is fair, it doesn't discriminate between
>people.

One of my worst memories of what I consider unfair chance was a
MacMahon tournament in which SOS gave me place 2 (and a much smaller
prize) when direct comparison would have given me place 1.

>SOS, and SOSOS etc are easy to calculate. There is probably some
>rationale for them in most situations

When you look at many 5 rounds MacMahon weekend tournaments, the top
three players have often played only a relatively small number of
games against each other. The final SOS standings are then decided by
how badly each top player's weakest opponent plays.

>they can not be "worse" than throwing dice

This is an absolute minimal requirement for a tiebreaker to be better
than utter nonsense. However, this aspect is very overemphasized. It
is much more relevant how a tiebreaker compares to not using any
tiebreaker.

>It is a great
>advantage for the organiser to be sure that a winner can really be
>determined - it provides for a greater choice of prizes and awards.

Which greater choice? Of course, you also get the option to distribute
trophies. However, it is very well possible to share winning places
and use a tiebreaker for only one purpose: To see who gets which
trophy.

>The point where I'm most suspicious myself
>regards possible regional discrepancies in ranks, both in dan ranks and
>in EGF GoR. If people from some countries/regions are relatively
>underrated / underranked, they are not assigned fairly to MM groups in MM
>tournaments.

Yes.

--
robert jasiek

Robert Jasiek

unread,
Jan 15, 2007, 11:19:09 AM1/15/07
to
On 15 Jan 2007 07:41:10 -0800, henricb...@gmail.com wrote:
>I think the MM system, including SOS
>and the like, is a rather optimised compromise.

Depends on how far we bend "rather" :)

>My first point is that
>it's probably futile to spend much effort on finding improvements on
>SOS etc, as one can not expect to improve things a lot anyway,

I consider it a great improvement if splitting hairs (SOS) is replaced
by acknowledging the presence of noise (not using any tiebreaker).

Smaller improvements are also possible: E.g., replace SOS by SOS-2 and
do not apply it if direct comparison (is unambiguous and) disagrees.
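
For concreteness, a rough sketch of that combination in Python. I
assume here that "SOS-2" means SOS with the two lowest opponent scores
discarded (other definitions are in use), and break_tie() encodes only
one possible reading of the rule; plain SOS is included for comparison:

def sos(player, opponents_of, score):
    # Sum of the final scores (wins or MacMahon points) of the
    # player's opponents.
    return sum(score[o] for o in opponents_of[player])

def sos_minus_2(player, opponents_of, score):
    # Assumption: drop the two lowest opponent scores before summing.
    opp_scores = sorted(score[o] for o in opponents_of[player])
    return sum(opp_scores[2:])

def direct_comparison(a, b, beat):
    # beat: dict (winner, loser) -> number of mutual games won.
    a_wins = beat.get((a, b), 0)
    b_wins = beat.get((b, a), 0)
    if a_wins != b_wins:
        return a if a_wins > b_wins else b
    return None  # no mutual game, or ambiguous

def break_tie(a, b, opponents_of, score, beat):
    # Order by SOS-2, but do not apply it when an unambiguous direct
    # comparison points the other way; then the tie stays unbroken.
    s2a = sos_minus_2(a, opponents_of, score)
    s2b = sos_minus_2(b, opponents_of, score)
    by_sos2 = a if s2a > s2b else b if s2b > s2a else None
    head_to_head = direct_comparison(a, b, beat)
    if by_sos2 is None:
        return None
    if head_to_head is not None and head_to_head != by_sos2:
        return None
    return by_sos2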

>My second point is that I disagree with
>Robert in that I don't think there is a lot to be gained from splitting
> >the win, rather than applying some admittedly arbitrary tiebreaker. One
>doesn't gain anything in terms of fairness by splitting the win, as
>opposed to awarding it according to preestablished rules which involve
>an element of chance.

What about becoming stronger to experience suffering from tiebreakers
more often? :)

>Prize money can of course be split easily enough, but not titles,

It is very easy to split a title: Declare the tied players to share
it.

>honorary trophies, qualification for a higher level tournament and so
>on.

There is not much "and so on". E.g., go books as prizes are not a
problem because it is a good custom to let every book prize getter
choose his preferred books anyway.

>I've been to tournaments where even last resort tiebreakers like
>"the oldest playeer" were used.

Oh yeah, I once witnessed the sixth tiebreaker "draw lots" being
used in a seeding tournament...

--
robert jasiek

henricb...@gmail.com

unread,
Jan 15, 2007, 1:14:37 PM1/15/07
to

Robert Jasiek skrev:

> I consider it a great improvement if splitting hairs (SOS) is replaced
> by acknowledging the presence of noise (not using any tiebreaker).
>
> Smaller improvements are also possible: E.g., replace SOS by SOS-2 and
> do not apply it if direct comparison (is unambiguous and) disagrees.
>
> >My second point is that I disagree with
> >Robert in that I don't think there is a lot to be gained from splitting
> >the win, rather than applying some admittedly arbitrary tiebreaker. One
> >doesn't gain anything in terms of fairness by splitting the win, as
> >opposed to awarding it according to preestablished rules which involve
> >an element of chance.
>
> What about becoming stronger to experience suffering from tiebreakers
> more often? :)
>
> >Prize money can of course be split easily enough, but not titles,
>
> It is very easy to split a title: Declare the tied players to share
> it.
>
> >honorary trophies, qualification for a higher level tournament and so
> >on.
>
> There is not much "and so on". E.g., go books as prizes are not a
> problem because it is a good custom to let every book prize getter
> choose his preferred books anyway.

I wonder if you are missing the core of my argument?

If two people tie for the first place you suggest to split the prize,
i.e. to award equal benefits to both players. But this is equivalent to
throwing dice: the expectation value for the gain does not change for
any of the two.

Now, if you admit that SOS may at least have something more to do with
performance in the tournament than throwing dice, then there is at
least some improvement, compared to the random or splitting case.
Nothing prevents you from recognising the presence of noise anyway, just
go ahead and recognise it: SOS is noisy.

For a tournament organiser it makes life much easier not to be
compelled to split the prize. The tournament may be used for
qualification to something else. One can order gold, silver and bronze
prize trophies with "1:st prize, XXX-Open Year YYYY" written on them,
or whatever. One can appoint the ZZZ Champion YYYY etc. One can award
some tasty piece of comestible, or a bottle of Champagne. Or let the
winner have the first pick at the book table.

I'm not a very strong player, but incidentally it happened to me too
to miss the first prize in a tournament due to a very questionable SOS
effect (having to do with the flickering on and off of the MM points
for a player who had not played all rounds and consequently got 0.5
points for each round not played, but rounded off to an integer). The
gist of the fairness argument is that it was just tough luck for me, it
could equally well have happened to anybody else, hence one cannot call
it unfair, just unlucky :-)

best regards,
H.

Robert Jasiek

unread,
Jan 15, 2007, 2:15:14 PM1/15/07
to
On 15 Jan 2007 10:14:37 -0800, henricb...@gmail.com wrote:
>If two people tie for the first place you suggest to split the prize,
>i.e. to award equal benefits to both players. But this is equivalent to
>throwing dice

It is not equivalent: If the tie is not broken, then there are two or
more players that SHARE the same place. If the tie is broken by
throwing dice (note: drawing lots instead of throwing dice ensures that
it is done after a finite amount of time), then there is exactly one player
that gets the higher place while the other player(s) get lower places.

>the expectation value for the gain does not change for
>any of the two.

If the tie is not broken, then the expectation value for each tied
player to get the shared top place is 1. If the tie is broken by
drawing lots, then the expectation value for each tied player (of p
tied players) to get the top place is 1/p.
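
A toy calculation with invented numbers makes the distinction explicit:

p = 3            # players tied for first
prize = 300.0    # hypothetical first-prize money; lower prizes ignored

# Tie not broken: every tied player holds (a share of) the top place.
shared = {"top_place": 1.0, "expected_money": prize / p}

# Tie broken by drawing lots: exactly one of the p players ends up first.
lots = {"top_place": 1.0 / p, "expected_money": prize / p}

# The expected money is the same in both schemes; what changes is the
# expectation of holding the top place itself: 1 versus 1/p.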

>Now, if you admit that SOS may at least have something more to do with
>performance in the tournament than throwing dice,

Also not using any tiebreaker has more to do with performance in the
tournament than throwing dice because not using any tiebreaker does
not take away anything of the player's performance in the tournament.
Throwing dice takes away some information because it adds new noise
to the player's performance in the tournament.

>then there is at least some improvement, compared to the random

Yes.

>or splitting case.

No.

>For a tournament organiser it makes life much easier not to be
>compelled to split the prize.

Splitting the prize is easy if only the organizer has chosen a
divisible prize. Unless we have a special case of a requirement for
indivisible prizes (seeding places or flight tickets), there is no
need to choose indivisible prizes.

>One can order gold, silver and bronze
>prize trophies with "1:st prize, XXX-Open Year YYYY" written on them,
>or whatever.

Of course, one can. As much as one can throw away any trophy, as I
have done.

But using trophies is unnecessary and preprinting their functions is
also unnecessary.

>One can award
>some tasty piece of comestible, or a bottle of Champagne. Or let the
>winner have the first pick at the book table.

There is no need to split places if all one wants to split is the
order of access to choose among food or books of similar value. It
suffices to keep the places tied and draw lots for the sole purpose
of... - but I have said that already in an earlier mail.

--
robert jasiek

henricb...@gmail.com

unread,
Jan 15, 2007, 4:40:17 PM1/15/07
to

Robert Jasiek skrev:

> On 15 Jan 2007 10:14:37 -0800, henricb...@gmail.com wrote:
> >If two people tie for the first place you suggest to split the prize,
> >i.e. to award equal benefits to both players. But this is equivalent to
> >throwing dice
>
> It is not equivalent: If the tie is not broken, then there are two or
> more players that SHARE the same place. If the tie is broken by
> throwing dice (note: drawing lots instead of throwing dice ensures that
> it is done after a finite amount of time), then there is exactly one player
> that gets the higher place while the other player(s) get lower places.
>
> >the expectation value for the gain does not change for
> >any of the two.
>
> If the tie is not broken, then the expectation value for each tied
> player to get the shared top place is 1. If the tie is broken by
> drawing lots, then the expectation value for each tied player (of p
> tied players) to get the top place is 1/p.

That's an interesting point of view.
Am I to understand that the value of the shared prize does not diminish
as 1/p ?
If the value of the shared prize falls off slower than 1/p with p, then
don't we have to conclude that the way to make people most happy must
be to share the first prize between all participants?

> >For a tournament organiser it makes life much easier not to be
> >compelled to split the prize.
>
> Splitting the prize is easy if only the organizer has chosen a
> divisible prize. Unless we have a special case of a requirement for
> indivisible prizes (seeding places or flight tickets), there is no
> need to choose indivisible prizes.

Obviously not, but the point is that some players actually like those
undivisible prizes. It also happens that the organiser can make a good
bargain for the prize winner by accepting some undivisible object as
prize from a sponsor.

> >One can order gold, silver and bronze
> >prize trophies with "1:st prize, XXX-Open Year YYYY" written on them,
> >or whatever.
>
> Of course, one can. As much as one can throw away any trophy, as I
> have done.
>
> But using trophies is unnecessary and preprinting their functions is
> also unnecessary.

I'm not sure I follow, I'm afraid. Of course it is unnecessary, but
surely it is also unnecessary to organise any tournament at all?
We organise tournaments because we like them and to give the
participants the opportunity to play go and enjoy themselves, don't we?

regards,
henric

entp...@yahoo.com

unread,
Jan 15, 2007, 7:31:25 PM1/15/07
to

ian wrote:

> Most tournaments in the Europe are EGF rated. Is it possible to start
> using GoR as a tiebreaker. Often when number of wins are equal we try
> to see who played the best with SOS, SOSOS, SODOS or others.
> Is it possible to use 'tournament performance GoR' as the tiebreaker.
> This being the rating at which they would have entered at to experience
> no rating change.
>
> For 100% wins, this is useless of course.

If the tiebreaker is desired to reflect the tournament performance as
much as possible, it is better to calculate rating changes as many
times as it takes for the ratings to converge. What you get is
independent of ratings at the start of the tournament. You merely
redistribute the total sum of points at the start of the tournament
according to the results in the tournament.
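
For illustration, a rough sketch of such an iteration in Python, using
a simplified Elo-style winning expectancy instead of the actual EGF GoR
formula (so the numbers are only indicative):

def expected(ra, rb, spread=400.0):
    # Simplified Elo-style winning expectancy; the real EGF GoR
    # formula differs - this is only for illustration.
    return 1.0 / (1.0 + 10.0 ** ((rb - ra) / spread))

def performance_ratings(games, start_ratings, k=16.0, tol=1e-6,
                        max_iter=10000):
    # games: list of (winner, loser) pairs from the tournament.
    # start_ratings: pre-tournament rating for each player.
    # The rating changes are recomputed over and over on the same games
    # until they (hopefully) stop changing; the total sum of points is
    # conserved, it is only redistributed.  Undefeated players have no
    # finite fixed point, hence the max_iter safeguard.
    r = dict(start_ratings)
    for _ in range(max_iter):
        delta = {p: 0.0 for p in r}
        for w, l in games:
            gap = 1.0 - expected(r[w], r[l])
            delta[w] += k * gap
            delta[l] -= k * gap
        if max(abs(d) for d in delta.values()) < tol:
            break
        for p in r:
            r[p] += delta[p]
    return r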

TK

Robert Jasiek

unread,
Jan 16, 2007, 3:02:44 AM1/16/07
to
On 15 Jan 2007 13:40:17 -0800, henricb...@gmail.com wrote:
>Am I to understand that the value of the shared prize does not diminish
>as 1/p ?

This is a highly subjective matter. My personal feeling about getting
a shared place (or seeing others get it in Go, other sports, or a
Nobel prize) is that idealistically it is worth almost as much as
getting the same place alone (regardless of the (money) prize being
shared).

>If the value of the shared prize falls off slower than 1/p with p, then
>don't we have to conclude that the way to make people most happy must
>be to share the first prize between all participants?

Between all players TIED FOR FIRST PLACE. (All participants, really
what a nonsense, except in a round-robin if there should be an all-way
tie ;) )

>Obviously not, but the point is that some players actually like those
>undivisible prizes.

I know only few that do.

>It also happens that the organiser can make a good
>bargain for the prize winner by accepting some undivisible object as
>prize from a sponsor.

Sure, some sponsors are as unreasonable as tiebreakers.

>I'm not sure I follow, I'm afraid. Of course it is unnecessary, but
>surely it is also unnecessary to organise any tournament at all?
>We organise tournaments because we like them and to give the
>participants the opportunity to play go and enjoy themselves, don't we?

It is possible to enjoy organization and participation in tournaments
without having or getting indivisible prizes.

--
robert jasiek

henricb...@gmail.com

unread,
Jan 18, 2007, 7:09:14 AM1/18/07
to

Robert Jasiek skrev:

> On 15 Jan 2007 13:40:17 -0800, henricb...@gmail.com wrote:
> >Am I to understand that the value of the shared prize does not diminish
> >as 1/p ?
>
> This is a highly subjective matter. My personal feeling about getting
> a shared place (or seeing others get it in Go, other sports, or a
> Nobel prize) is that idealistically it is worth almost as much as
> getting the same place alone (regardless of the (money) prize being
> shared).

Some may think that this discussion is silly, but I honestly think it
is a bit interesting, from a philosophical point of view.

We have already all agreed that SOS, SOSOS etc are not worse than
selecting the winner randomly. In the worst case they are equivalent to
throwing dice, but they may have a bit more to do with the performance
of the player in the tournament.

Now for sharing prizes versus attributing them according to SOS etc:

Robert claims that when p players tie for the win they MAY all have
performed equally well. It is then fair to split the prize in equal
parts of course. (Possibly) equal performance entails equal benefits.
But it must be at least equally fair to give the whole prize to the
player with the highest SOS for instance. Even if SOS in the particular
case is not related at all to performance, only random noise, it will
still be true that the players who may have performed equally well had
a priori the same chance (probability) to win the first prize. Now, if
SOS is not completely random, but does favor the player who performed
best, then it can not be considered unfair to give the prize to him/her
- treating the players either according to performance or equally must
both be considered fair, in the context of a competition.

As far as I can see, it is not possible to claim that splitting the
prize would be more fair than giving it to the guy with the highest
SOS. It can still be better for some other reason, of course. Robert is
applying an argument to the effect that sharing a prize among p players
is worth more to each of them than the fraction 1/p of what it would
have been worth to win the first prize alone.

Clearly this can not be a correct assumption as far as money prizes are
concerned. The whole idea behind money is that its value is
proportional to the amount: A/p is worth exactly 1/p times A. In that
situation, the value of winning the amount A with probability 1/p is
exactly the same as winning the amount A/p with probability 1. Money is
the easiest to split, but paradoxically we see that there is no
rational ground for splitting money prizes, rather than giving the lot
to the guy with the highest SOS.

We then come to other values, such as the honour in sharing the first
prize. Robert admits that this kind of value is subjective. Personally
he rejects the model that sharing the first prize between p players
would only be worth 1/p of winning the first prize alone. Robert feels
that it is actually worth almost the same as winning the prize alone.
Apparently he suggests to use tournament rules which maximise the total
value (honour) to all players who tie, and he feels that this total
value is higher if the prize is shared than if one takes it all.

> >If the value of the shared prize falls off slower than 1/p with p, then
> >don't we have to conclude that the way to make people most happy must
> >be to share the first prize between all participants?
>
> Between all players TIED FOR FIRST PLACE. (All participants, really
> what a nonsense, except in a round-robin if there should be an all-way
> tie ;) )

Who ties for first prize depends of course on the tournament rules.
Robert does not accept the reductio ad absurdum argument that if it is
worth more than 1/p times the value of winning alone to share the win
with (p-1) other players, then the total value is maximised if all
participants are allowed to share the first prize, so he must have in
mind some complicated value function V(p), or a V which depends on
things other than p.

In the past, I always thought that splitting money and other divisible
prizes was a good thing to do, whereas splitting titles and honorary
tokens would be impractical. But a more careful examination shows that
there is no point in splitting money and other truly divisible values.
At least Robert feels that splitting the honour gives more value to
players who tie than giving it all to one of them based on SOS. That
assessment all depends on the admittedly pretty esoteric properties of
an honorary value function V(p,...). It would be interesting to see
some other people's opinion on that V function!

cheers,
Henric

Robert Jasiek

unread,
Jan 18, 2007, 9:24:41 AM1/18/07
to
On 18 Jan 2007 04:09:14 -0800, henricb...@gmail.com wrote:
>We have already all agreed that SOS, SOSOS etc are not worse than
>selecting the winner randomly.

Among those otherwise tied for first place.

>In the worst case they are equivalent to throwing dice,

No. In the worst, they are worse than not using any tiebreaker. A
comparison to not using any tiebreaker is much more relevant than a
comparison to drawing lots (what you call throwing dice).

>but they may have a bit more to do with the performance
>of the player in the tournament.

How do you define "performance of the player in the tournament"?

>Now for sharing prizes versus attributing them according to SOS etc:
>
>Robert claims that when p players tie for the win they MAY all have
>performed equally well.

It depends on what exactly we choose as definition for "performance of
the player in the tournament". E.g., if we should define it as "his
number of wins / MacMahon points", then the players in question DO
perform equally well. Saying "MAY" presumes a different
definition.

>Even if SOS in the particular
>case is not related at all to performance, only random noise, it will
>still be true that the players who may have performed equally well had
>a priori the same chance (probability) to win the first prize.

Yes.

>Now, if
>SOS is not completely random, but does favor the player who performed
>best,

For being enabled to say so, we first need some definition of
"performing better". Or better yet, several definitions to compare
conclusions for each.

>then it can not be considered unfair to give the prize to him/her
>- treating the players either according to performance or equally must
>both be considered fair, in the context of a competition.

Your conclusion is too rash because it depends on 1) the chosen
definition of "performing better", 2) a comparison of various possible
definitions with each other, 3) a comparison of the quality relation
of SOS versus random to SOS versus using no tiebreaker to SOS versus
other tiebreakers worth studying.

If we start our a priori evaluation of SOS AFTER having already
installed it in a particular tournament, then (3) becomes immaterial
and it becomes meaningless to compare usage of SOS as tiebreaker
versus usage of no tiebreaker versus usage of other conceivable
tiebreakers. I.e., once the tournament has been announced to be using
SOS, players are (in theory) free not to participate because of that
tiebreaker. However, thereby we do not know if the tournament could,
in some sense, have been fairer if another tiebreaker or no tiebreaker
at all had been used. It is much more interesting to include also a
study of (3), i.e. to assume an abstract tournament for that we search
the best possible tiebreaker (which possibly might be to use no
tiebreaker at all).

>As far as I can see, it is not possible to claim that splitting the
>prize would be more fair than giving it to the guy with the highest
>SOS.

So far it is not really possible to say this or the opposite because
careful and extensive studies including (1), (2), and (3) have not
been made yet.

>Robert is
>applying an argument to the effect that sharing a prize among p players
>is worth more to each of them than the fraction 1/p of what it would
>have been worth to win the first prize alone.

I am not using this argument to compare which of using no tiebreaker
or using SOS is fairer.

>Clearly this can not be a correct assumption as far as money prizes are
>concerned.

The perceived idealistic value of a (possibly shared) place and the
available money per (possibly shared) prize should indeed not be
compared.

>The whole idea behind money is that its value is
>proportional to the amount:

Unless we speak of nonillions of ounces of gold, of course :)

>we see that there is no
>rational ground for splitting money prizes, rather than giving the lot
>to the guy with the highest SOS.

There is no rational ground within your too limited model. With the
same kind of limited argument, you can come to the same conclusion if
you replace, in your argument, SOS by the player's period of time
since he has started learning go, which also happens to be better than
drawing lots on average.

>Apparently he suggests to use tournament rules which maximise the total
>value (honour) to all players who tie,

I don't.

>and he feels that this total
>value is higher if the prize is shared than if one takes it all.

No.

>Robert does not accept the reductio ad absurdum argument that if it is
>worth more than 1/p times the value of winning alone to share the win
>with (p-1) other players, then the total value is maximised if all
>participants are allowed to share the first prize,

You assume wrongly that I would want to maximize a total of idealistic
values. (I want a careful and extensive analysis and comparison of
tiebreakers and using no tiebreaker, see above.)

>a more careful examination shows that
>there is no point in splitting money and other truly divisible values.

Your conclusion is too rash.

>At least Robert feels that splitting the honour gives more value to
>players who tie than giving it all to one of them based on SOS.

Where "value" is something that should not be evaluated numerically.

>That
>assessment all depends on the admittedly pretty esoteric properties of
>an honorary value function V(p,...). It would be interesting to see
>some other people's opinion on that V function!

LOL.

--
robert jasiek

Jouni Karvo

unread,
Jan 18, 2007, 9:54:16 AM1/18/07
to

hi,

I think two things are being mixed up in this discussion: samples and
averages

- throwing dice is of course fair (in some sense), but it produces the
average profit of 1/p only in a series of tournaments

- using no tie-breaker produces the same 1/p average profit with only
one tournament

- the unfairness of tie-breakers is discussed on a per-tournament basis,
where it can produce an "unfair" sample

I wonder if the question should be if the tiebreaker sensibility
should be assessed as the fairness of the average profit when the
tournament number grows.

Naturally it would be difficult to model it this way, too ;)

yours,
Jouni

Roy Schmidt

unread,
Jan 18, 2007, 12:57:47 PM1/18/07
to
<henricb...@gmail.com> wrote:

> As far as I can see, it is not possible to claim that splitting the
> prize would be more fair than giving it to the guy with the highest
> SOS. It can still be better for some other reason, of course. Robert is
> applying an argument to the effect that sharing a prize among p players
> is worth more to each of them than the fraction 1/p of what it would
> have been worth to win the first prize alone.

Clearly this is so. If one awards the top place prize money to one
individual on the basis of tie breaks, then what of the second place money?
Third place? If the tie is not broken, then the standard solution is to add
together the money for the three (or n) places for the three (or n) tied
individuals and split that sum evenly amongst them. For the player with
the third best SOS, this would be much more desirable. And if SOS is
usually the same as a dice roll, the player with third best SOS has been
treated unjustly. The tournament organizer has forced this player to gamble
away some of the prize money.
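
In other words (with invented prize amounts):

def pooled_share(prizes, n_tied, first_tied_place=1):
    # Standard handling of an unbroken n-way tie: pool the money for
    # the places the tied players occupy and split it evenly.
    start = first_tied_place - 1
    return sum(prizes[start:start + n_tied]) / n_tied

prizes = [300, 200, 100, 50]      # invented amounts for places 1-4
print(pooled_share(prizes, 3))    # 3-way tie for first: 200.0 each
# With an SOS tiebreak instead, the player with the third-best SOS gets
# only 100 - the forced gamble described above.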

> We then come to other values, such as the honour in sharing the first
> prize. Robert admits that this kind of value is subjective. Personally
> he rejects the model that sharing the first prize between p players
> would only be worth 1/p of winning the first prize alone. Robert feels
> that it is actually worth almost the same as winning the prize alone.
> Apparently he suggests to use tournament rules which maximise the total
> value (honour) to all players who tie, and he feels that this total
> value is higher if the prize is shared than if one takes it all.

Of course, the exception to this would be those situations where the top
finisher qualifies for some further competition. In such case, some sort of
tie-breaking playoff would be appropriate. If we consider the sense of
amateur competition, then prize money is only a small token of success. We
are mostly focused on the honor of winning -- that is the primary motivation
of amateur competition (if it is not your motivation, then you need a
reality check!). So being listed as co-champion is indeed worth more than
1/p, and infinitely more valuable than being listed as 1st runner-up, even
if both places are paid the same.

Cheers, Roy

--
my reply-to address is gostoned at insightbb dot com
------
The Bradley Go Association meets every Thursday evening at Kade's Coffee on
War Memorial Drive (opposite the Target/Cub Food/Lowe's/Best Buy center).

henricb...@gmail.com

unread,
Jan 19, 2007, 5:14:39 AM1/19/07
to

Robert Jasiek skrev:

> >but they may have a bit more to do with the performance
> >of the player in the tournament.
>
> How do you define "performance of the player in the tournament"?

I don't think the argument requires a particular definition of
performance, or agreement on a single definition. The only thing that's
needed is that "performance" be a measure with the property (let's call
it the F-property) that we think it's fair to award prizes according to
"performance".

What we have in mind is of course that winning many games shows good
performance, but possibly also winning games against strong opponents,
winning games which were hard to win etc. Even though it may be hard to
prove or quantify, the premise is that "performance" may have a positive
correlation with SOS, SOSOS etc, and at any rate not a negative
correlation.


> >Robert claims that when p players tie for the win they MAY all have
> >performed equally well.
>
> It depends on what exactly we choose as definition for "performance of
> the player in the tournament". E.g., if we should define it as "his
> number of wins / MacMahon points", then the players in question DO
> perform equally well. Saying "MAY" presumes a different
> definition.

See above.

> Your conclusion is too rash because it depends on 1) the chosen
> definition of "performing better",

See above.

> 2) a comparison of various possible
> definitions with each other,

See above

> 3) a comparision of the quality relation
> of SOS versus random

Yes, but there seems to be consensus that SOS is either equivalent to
random or better

> to SOS versus using no tiebreaker

Right, that was the comparison I was making. My conclusion was that
there is either no difference in terms of fairness, or SOS is more
fair, depending on 3)

> to SOS versus other tiebreakers worth studying.

which was of course the start of the discussion. My conclusion
regarding SOS does not depend on that, but let me restate that SOS has
the advantages, compared to Ian's tiebreaker, that it only involves the
current tournament and that it's easier to calculate.

> tiebreakers. I.e., once the tournament has been announced to be using
> SOS, players are (in theory) free not to participate because of that
> tiebreaker.

True.

> However, thereby we do not know if the tournament could,
> in some sense, have been fairer if another tiebreaker or no tiebreaker
> at all had been used.

I object to the use of "fairer" here. Better maybe, such as e.g. making
the participants happier, on average, but not fairer. I don't accept
that no tiebreaker is fairer.
If you can find another tiebreaker, which can be shown to correlate
better with performance (according to some acceptable definition), then
yes, that might be seen as fairer.

> It is much more interesting to include also a
> study of (3), i.e. to assume an abstract tournament for that we search
> the best possible tiebreaker

This might be interesting, but is it really worthwhile? Simplicity
will always be a great advantage, so you don't want to invent anything
that's much more complicated than SOS; that poses tight limits on the
investigation.

>(which possibly might be to use no tiebreaker at all).

Again, I think not. No tiebreaker may be better, but not fairer, in my
opinion.

> >Robert is
> >applying an argument to the effect that sharing a prize among p players
> >is worth more to each of them than the fraction 1/p of what it would
> >have been worth to win the first prize alone.
>
> I am not using this argument to compare which of using no tiebreaker
> or using SOS is fairer.

No, in fact it can not be used for that purpose. But you do use an
argument like that to claim that no tiebreaker is better, don't you?

> >The whole idea behind money is that its value is
> >proportional to the amount:
>
> Unless we speak of nonillions of ounces of gold, of course :)

Ok, perhaps proportionality does not hold either for very large or for
very small sums, but we can easily restrict the range to where it does
hold.

Jouni:


>I wonder if the question should be if the tiebreaker sensibility
>should be assessed as the fairness of the average profit when the
>tournament number grows.

Why not. But if there is an issue here, it has to do with semantics and
ethics related to "fairness" and "probability". I can't see why it
would not be fair to let people enter a game where at some points
players who have performed equally well are allowed to make some profit
with equal probability. Suppose you award lots to two winners who tie
for first place. Both lots have equal winning chances. I think that
must be fair. I don't think it is correct, if only one of the players
then is lucky and wins a new sportscar, to claim afterwards that the
tournament was unfair.

Let me add that I think those who are keen to win a tournament should
simply win all their games. If they do, it is very rare that they don't
win the tournament, provided that it has been sensibly organised.

cheers,
H.

Jouni Karvo

unread,
Jan 19, 2007, 2:48:53 PM1/19/07
to

hi,

henricb...@gmail.com writes:
>
> I don't think the argument requires a particular definition of
> performance, or agreement on a single definition. The only thing that's
> needed is that "performance" be a measure with the property (let's call
> it the F-property) that we think it's fair to award prizes according to
> "performance".

Ah, but I think that if you wish to find some "fairness" for the
results of a tournament, you actually need some definition of what
good "performance" is.

Although there is a danger that I state the obvious, I'll give an
example of a tournament with four players, each playing each other,
and where draws result in a re-match.

Let A->B mean A won B. The results are:
A->B
B->C
C->A
A->D
B->D
C->D

The wins:
A: 2
B: 2
C: 2
D: 0

I guess we can all agree that D did not win, but which one of the
other players did perform best in the tournament?

If you use some GoR tiebreaker, you have already made a definition
where tournament performance in effect is determined by performance in
earlier tournaments instead of the current one.

So if you wish to define tournament performance so that it is only
tournament performance, you have several options:

- share the first prize
- declare the result undecided, and not give a first prize
- continue with players A, B and C, and play another full round, until
you get some differences in the result. Naturally one of them
might need to catch the flight home and cannot take part in
some of the later rounds - so his performance in the
tournament is worse than that of the others. Is this fair?
... well, this is probably already off topic, but I could not resist ;)

So I do think it is essential to first define what it means to be
the best in the tournament. The easy way to do it is that everyone
plays everyone, and the undefeated player wins. If there are loops, as
in this exaggerated example, it is not clear at all who was the
best. And in the worst case, where not everybody plays everybody
else, the graph might even fall apart into disconnected components. Then
the definition of the best performance needs to be something else.
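
A few lines of Python (standard library only, nothing specific to any
pairing program) make the point of the example explicit:

from collections import defaultdict

results = [("A", "B"), ("B", "C"), ("C", "A"),
           ("A", "D"), ("B", "D"), ("C", "D")]   # X beats Y

wins = defaultdict(int)
for winner, loser in results:
    wins[winner] += 1
    wins[loser] += 0              # make sure every player appears

top = max(wins.values())
leaders = sorted(p for p, w in wins.items() if w == top)  # ['A', 'B', 'C']

beats = set(results)
# A leader would "dominate" if he had beaten every other leader; in
# this cycle nobody has, so direct comparison cannot order A, B and C.
dominators = [a for a in leaders
              if all((a, b) in beats for b in leaders if b != a)]
print(leaders, dominators)        # ['A', 'B', 'C'] []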


> >I wonder if the question should be if the tiebreaker sensibility
> >should be assessed as the fairness of the average profit when the
> >tournament number grows.
>
> Why not. But if there is an issue here, it has to do with semantics and
> ethics related to "fairness" and "probability". I can't see why it
> would not be fair to let people enter a game where at some points
> players who have performed equally well are allowed to make some profit
> with equal probability. Suppose you award lots to two winners who tie
> for first place. Both lots have equal winning chances. I think that
> must be fair. I don't think it is correct, if only one of the players
> then is lucky and wins a new sportscar, to claim afterwards that the
> tournament was unfair.

Sure, then the losing player did not understand the asymptotic
fairness (s)he achieves when his/her tournament number approaches
infinity ;) (provided the tournament system was defined so)

> Let me add that I think those who are keen to win a tournament should
> simply win all their games. If they do, it is very rare that they don't
> win the tournament, provided that it has been sensibly organised.

That is indeed the only way, if my scribblings and the example are
anywhere near reasonable. However, considering as an example some
person entering a tournament using MM pairings with an initial ranking
significantly lower than his playing skill, there is surely a
possibility of winning all games without winning the tournament.

yours,
Jouni

Robert Jasiek

unread,
Feb 1, 2007, 3:00:01 PM2/1/07
to
On 19 Jan 2007 02:14:39 -0800, henricb...@gmail.com wrote:
>particular definition of performance [...about fairness and research...]

Having been without internet for some days, I am out of a suitable
mode for continuing this discussion now. We might resume it later,
especially when more research becomes available.

>there seems to be consensus that SOS is either equivalent to
>random or better

Whatever. One thing should not be forgotten though: What is the
purpose of comparing something with "random"? Random is not a
measure of quality related to Go skill. If a tiebreaker is compared to
random, then this comparison should thus not be made to measure Go
skill or to improve the quality of its measurement - it should only be
made to assess some abstract, mathematical quality of the (tiebreaker)
tool.

Since SOS is (said to be) better than random, it passes the
mathematical test of being a tool worth further consideration. That
further consideration might then, and only then, be about judging the
quality of the relation between SOS and Go skill, and whether using SOS
increases or decreases the precision of assessing Go skill.

Details of this deserve research.

--
robert jasiek
