I built a model with the following parameters:
1. An individual has a given "true" rating.
2. The FIBS rating formula accurately describes this player's likelihood
of winning a match vs. other players, based on their ratings.
3. The player plays only 5-point matches, against opponents whose
ratings are drawn at random from 1500, 1600, 1700, 1800, or 1900.
4. All ratings assume experience >400, so no beginner multiplier applies.
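The post doesn't include the actual code, but the model's assumptions map directly onto the published FIBS formulas. A minimal sketch (function names are my own; the constants are the ones FIBS documents for experience above 400):

```python
import math

def win_prob(rating, opp_rating, match_length):
    """Probability that a player rated `rating` beats one rated
    `opp_rating` in a match to `match_length` points (FIBS formula)."""
    return 1.0 / (1.0 + 10.0 ** ((opp_rating - rating)
                                 * math.sqrt(match_length) / 2000.0))

def rating_change(rating, opp_rating, match_length, won):
    """Rating delta after one match.  Experience > 400 is assumed,
    so the beginner multiplier is 1.  The exchange is zero-sum:
    the winner gains 4*sqrt(n)*(1 - p), the loser drops by the
    same amount."""
    p = win_prob(rating, opp_rating, match_length)
    return 4.0 * math.sqrt(match_length) * ((1.0 - p) if won else -p)
```

For two equally rated players in a 5-point match, each side risks or gains 4*sqrt(5)*0.5, about 4.47 points.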
I'm running a number of tests on it. The first concerns a player with a
"true" rating of 1800 who starts at 1800. The question is: if (s)he
plays, how high or low will the rating go?
I found that, over the following numbers of matches, the AVERAGE
maximum and minimum ratings achieved were:

Matches   Avg. max / Avg. min
  1000        1881 / 1719
  2500        1902 / 1698
  5000        1916 / 1682
 10000        1929 / 1668
This suggests that for players who play a lot, rating swings of 200
points from high to low are not surprising, or equivalently, variations
of 100 to 125 points from their "true" rating.
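A single player history of the kind averaged above can be sketched as follows. This is my own reconstruction of the setup described in the post, not Hank's actual code: the fixed "true" rating decides who wins each match, while the tracked rating is what the FIBS update formula moves around.

```python
import math
import random

def simulate_history(true_rating=1800, start=1800, n_matches=1000,
                     opponents=(1500, 1600, 1700, 1800, 1900),
                     match_length=5):
    """Simulate one player history under the model above and return
    the (max, min) tracked rating reached along the way."""
    def p_win(r, opp):
        return 1.0 / (1.0 + 10.0 ** ((opp - r)
                                     * math.sqrt(match_length) / 2000.0))

    rating = start
    hi = lo = rating
    for _ in range(n_matches):
        opp = random.choice(opponents)
        # The "true" rating determines the actual outcome...
        won = random.random() < p_win(true_rating, opp)
        # ...but the update is computed from the current tracked rating.
        p = p_win(rating, opp)
        rating += 4.0 * math.sqrt(match_length) * ((1.0 - p) if won else -p)
        hi, lo = max(hi, rating), min(lo, rating)
    return hi, lo
```

Averaging `simulate_history`'s results over many thousands of runs would reproduce the kind of figures in the table above.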
In all cases I am simulating at least 10,000 player histories, sometimes
more; in other words, tens of millions, and up to billions, of matches.
The standard error of my results should therefore be quite small.
I'm going to use the model to run some other tests, and I'll bore the
newsgroup with the results.
- Hank (Hank_VaUSA)
Of course, this could all be moot if the rankings do indeed presently
reflect the actual ratings. Perhaps the reason playerA's ranking hasn't
changed is that, out of the thousands of players with id's and ratings
rankings on FIBS, none of those playing recently won or lost in a way
that would move playerA up or down in the rankings.