
Cardinal Classic Results (unofficial)


Doug Bone

Feb 21, 1995, 7:53:06 PM
My apologies for the errors in my previous post. What follows
are still UNOFFICIAL results from the Cardinal Classic, though they
were generated from looking at the stat sheets. I don't doubt there
are errors. Official results and player stats will be posted later.

1. Berkeley A 14-1 lost to Michigan 305-150, beat BYU A 235-165 on
the tournament's hardest packet, beat Cal B 245-225
Most of this team will not play at the CBI RCT.
I don't know what their plans are for ACF regionals.
2. BYU A 14-1 lost to Cal A; beat Vandy A 290-270 in RR; scored
690 against Davis A; beat Cal B 215-205
3. Vandy A 12-3 lost to BYU A, Cal A, and Cal C
Vandy looked good but not great when I moderated
for them in the round robin, but they looked
very impressive in the playoffs defeating BYU A
305-160 in the semifinal and then beating Cal A
fairly substantially to win the tournament. They
seem better than the Vandy teams at earlier
Cardinal Classics.
4. Berkeley B 12-3 lost to Vandy A, BYU A, Cal A (the three top teams)
5. Berkeley C 11-4 lost to BYU A, Cal A 215-365, Cal B 175-225, and
Stanford; Berkeley has an incredibly deep program
and dominated the tournament as a school; they also
placed three teams in the top five of their own
tournament
6. Stanford 8-7 lost to Cal A, Cal B, BYU A, Vandy A, Vandy B, BYU C,
and Fresno B; everyone who wanted to play was in for
a few games; it was an 8-person team with everyone
playing 50% of the time or so, except for Gerard and me
7. Michigan 8-7 lost to Stanford, Georgia State, Cal B, Cal C,
BYU A, and BYU B; only team to beat Cal A (305-150)
Dave Frazee was among the tournament individual leaders
at the halfway mark
8. Fresno B 7-8 beat Stanford, Georgia State, Fresno A, Vandy B,
Vandy C, Davis A, and Davis B; probably the favorite
for third in R15; played some of the top teams close
9. Vandy B 7-8 beat Stanford, Georgia State, BYU B, BYU C, Vandy C,
Davis A, Davis B
10. Georgia St 7-8 beat Michigan, BYU B, BYU C, Fresno A, Vandy C, Davises
11. BYU B 6-9 beat Michigan, Fresno B, BYU C, Vandy C, Davises
12. BYU C 6-9 beat Stanford, Fresnos, Vandy C, Davises
13. Fresno A 4-11 beat BYU B, Vandy B and C, Davis B
14. Davis A 3-12 beat Vandy C, Fresno A, Davis B; lost by 20 to Cal B
15. Vandy C 1-14 beat Davis B
16. Davis B 0-15 lost to Cal B 165-170

Comments:
BYU A and Cal A were the class of the round-robin portion of the tournament.
Cal B was almost as good, as was Vandy A. Vandy A looked much more
impressive in the playoffs. These four teams are of roughly equal
ability and I wouldn't be surprised to see any one of them win a specific
game. Cal C is almost as good as the other four, though they seem a definite
notch lower. That team would also be a threat to win any tournament.

The drop off to the next teams is a bit severe. Michigan and Stanford
finished just above .500, each beating one higher team (Michigan defeated
Cal A and Stanford defeated Cal C). Our actual regional team will be
substantially better, though I don't know to what extent. I don't know
who Michigan will be sending to regionals.

The 7-8 teams {Fresno B, Vandy B, and Georgia State} beat none of the
top five teams but knocked off Stanford twice and Michigan once. Georgia State
seemed to be having trouble with neg fives, perhaps as our questions were
atypical for them.

BYU B beat Michigan and BYU C downed Stanford; otherwise, the remaining
teams lost to all .500+ teams. Many of these teams played quality
games, the most extreme example being Davis B's close loss to fourth-place
Berkeley B.

The highest scoring match was BYU over Davis 670-75. The lowest
scoring was 100-95. I don't have average scores or scores-by-packet
yet, those will be generated along with official statistics. The
average number of toss-ups read per round was 22.9, a bit below
traditional CBI levels. Moderators ranged from 19.0 to 24.6. We
never exceeded 27 in a game. The average value of a bonus was 26.5.
How does this compare with CBI bonus numbers? [Tom?] How does this
number (factored by 8/7ths) compare to Penn Bowl? [Pat?] I've come to
believe that the three most important metrics in evaluating how
close a tournament comes to CBI style (which is our goal with this
tournament) are 1) the number of toss-ups read per round, 2) the percent of
toss-ups answered per round, and 3) average bonus conversion. A low
toss-up-per-round figure indicates long questions (especially long
boni) or slow moderating. A low answer percentage indicates toss-ups
are too hard. A low bonus conversion rate indicates boni are too
hard. I'll compute metrics (2) and (3) when the data is available.
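For anyone who wants to compute the same three metrics for their own
tournament, here is a small sketch. The game records below are invented
for illustration; they are not the actual Cardinal Classic data.

```python
# Sketch of the three CBI-style metrics described above.
# Each record: (tossups_read, tossups_answered,
#               bonus_points_earned, bonuses_heard, avg_bonus_value)
# These numbers are made up for illustration only.
games = [
    (23, 20, 410, 20, 26.5),
    (22, 19, 380, 19, 26.5),
    (24, 22, 455, 22, 26.5),
]

total_read = sum(g[0] for g in games)
total_answered = sum(g[1] for g in games)

# Metric 1: average number of toss-ups read per round
tossups_per_round = total_read / len(games)

# Metric 2: percent of toss-ups answered
answer_pct = 100 * total_answered / total_read

# Metric 3: bonus conversion -- points earned as a percentage
# of the total bonus value heard
bonus_points = sum(g[2] for g in games)
bonus_available = sum(g[3] * g[4] for g in games)
bonus_conversion = 100 * bonus_points / bonus_available

print(f"TU/round: {tossups_per_round:.1f}")
print(f"Answered: {answer_pct:.1f}%")
print(f"Bonus conversion: {bonus_conversion:.1f}%")
```

Low values on (1) suggest long questions or slow moderating, low values
on (2) suggest hard toss-ups, and low values on (3) suggest hard boni,
exactly as described above.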

The most controversial protest of the tournament arose in the semifinals.
It ended up not affecting the outcome, but the protest committee was split
on the ruling. The key point is when answerers should be prompted for
more information. The question in dispute described the game Doom II.
The question instructed the moderator to prompt if "Doom" were answered.
What do you think? The arguments appear to boil down to the monarch analogy
and the John Adams analogy. With monarchs, it is quite standard to
prompt for more information. An answer "Elizabeth" is assumed to include
either QE1 or QE2; clarification is requested. If, however, one answers
"John Adams", no clarification is needed since that answer is not treated as
a subset of answers, one of which is "John Quincy Adams." If one were
allowed to say "John Adams" and be prompted for "Quincy", one would never
miss such a question since you would not be prompted for the middle name of
the other guy, whatever that is. To my mind, Doom is just like John Adams
and the movie The Godfather. No prompt is appropriate. The committee didn't
see it that way, ruling that prompting was appropriate. I don't think
this situation is adequately treated in the rules. Official CBI questions
indicate whether one should prompt, but their decisions
in the matter are haphazard in any event. What do people think is the proper
prompting rule?

I'll leave it to others to comment on the tournament logistics. Nothing
seemed to go terribly wrong from my perspective, and I hope participating
teams enjoyed themselves. Let me publicly state that Gerard did most
of the work for this tournament, including most of the question editing.
To the extent that it went well, he is to thank.

Tom Michael

Feb 22, 1995, 9:42:58 AM
In article <bone.79...@foghorn.stanford.edu>,
Doug Bone <bo...@foghorn.stanford.edu> wrote:

{deletia}

>The highest scoring match was BYU over Davis 670-75. The lowest
>scoring was 100-95. I don't have average scores or scores-by-packet
>yet, those will be generated along with official statistics. The
>average number of toss-ups read per round was 22.9, a bit below
>traditional CBI levels.

The average might be below some RCT levels, but it seems to be on track
with the pace they desire from moderators at the NCT.

>Moderators ranged from 19.0 to 24.6. We
>never exceeded 27 in a game. The average value of a bonus was 26.5.
>How does this compare with CBI bonus numbers? [Tom?]

This value is high compared with CBI, which averages between 24 and 25
points of bonus value, and very rarely exceeds an average of 25 in an
individual pack.

>How does this
>number (factored by 8/7ths) compare to Penn Bowl? [Pat?] I've come to
>believe that the three most important metrics in evaluating how
>close a tournament comes to CBI style (which is our goal with this
>tournament) are 1) the number of toss-ups read per round, 2) the percent of
>toss-ups answered per round, and 3) average bonus conversion. A low
>toss-up-per-round figure indicates long questions (especially long
>boni) or slow moderating. A low answer percentage indicates toss-ups
>are too hard. A low bonus conversion rate indicates boni are too
>hard. I'll compute metrics (2) and (3) when the data is available.

I agree that this seems an excellent method for determining how close
you have come to recreating CBI tournament conditions. I wish more
tournaments would calculate this information.

--
Tom Michael - University of Virginia Office of Continuing Medical Education
and Coach, University Union College Bowl Team. I speak only for myself.
"We're not idiots - just buzz in like a maniac and trust that it's there."
_The Computer Wore Tennis Shoes_ (1995)

Patrick Matthews

Feb 22, 1995, 9:46:19 AM
Doug Bone (bone@foghorn) wrote:

[snip]

>yet, those will be generated along with official statistics. The
>average number of toss-ups read per round was 22.9, a bit below
>traditional CBI levels. Moderators ranged from 19.0 to 24.6. We
>never exceeded 27 in a game. The average value of a bonus was 26.5.
>How does this compare with CBI bonus numbers? [Tom?] How does this
>number (factored by 8/7ths) compare to Penn Bowl? [Pat?]

Prorating out to eight-minute halves, the average CC5 game had more
tossups read than the average PB5 game, by roughly 1.5 tossups/game.
(CC5's rate translates to 26.17 TU/g over eight-minute halves, as used at
PB5.) I attribute that mainly to two factors:

1. Better average moderator quality at CC5 (they have an *enormous* pool
of talented officials out there). While I feel the PB5 staff was
competent, the CC5 staff was undoubtedly more experienced.

2. Longer boni at PB5. There were fewer one-part boni at PB5 than at PB4, and it
makes a big difference. It's not necessarily a bad thing, as I agree
that great care must be taken with one-part, one-answer boni. However,
ESPECIALLY for timed tourneys, I think more questions should be written
as (or edited into) one-part, multi-answer boni.

A third factor that may or may not have been at work is that, with a
smaller field, the CC5 may not have had as many of the middle-of-the-road
or below average teams as PB5 had. With 64 teams, you get not only most
of the cream, but also an awful lot of also-rans. I'm not sure what
effect, if any, this may have had on field averages.
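The 8/7ths proration Doug asked about checks out as quoted. Assuming the
factor reflects seven-minute halves at CC5 versus eight-minute halves at
PB5:

```python
# Scale CC5's toss-up rate from seven-minute to eight-minute halves.
cc5_rate = 22.9              # average toss-ups read per game at CC5
prorated = cc5_rate * 8 / 7  # equivalent rate over eight-minute halves
print(f"{prorated:.2f} TU/g")  # prints 26.17 TU/g, matching the figure above
```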

I believe bonus value averaged somewhere in the 27's, and that the
percentage of tossups answered was a whisker below 90% (I don't have the
figures right in front of me). Without the file, I don't know what
neighborhood bonus conversion was in.

>I've come to
>believe that the three most important metrics in evaluating how
>close a tournament comes to CBI style (which is our goal with this
>tournament) are 1) the number of toss-ups read per round, 2) the percent of
>toss-ups answered per round, and 3) average bonus conversion. A low
>toss-up-per-round figure indicates long questions (especially long
>boni) or slow moderating. A low answer percentage indicates toss-ups
>are too hard. A low bonus conversion rate indicates boni are too
>hard. I'll compute metrics (2) and (3) when the data is available.

In general, I agree with Doug on the above. Of course, the goal for PB5
was to surpass what CBI gives us: we emulate their style in terms of
question length, but the PB5 questions are harder than your average set
of CBI questions and featured more concrete clues, IMHO.

Pat
--
Patrick G. Matthews matt...@netaxs.com
271 S. 15th St. #1804 215-546-1108 home
Phila., PA 19102 215-299-7524 work
Never feed the hand that bites you. 215-299-7523 fax
