
Symposium: The Evolution of Judging Performance

Jeff Mitchell

Dec 5, 1997

The Evolution of Judging Performance

The system of judging junior drum and bugle corps has been one of the most
discussed, yet least understood aspects of our activity. This paper will
examine the development of the judging systems over the years, with an
emphasis on the rationale for the changes. It is hoped that this will
further the knowledge of the drum corps community about those who wear
the green shirt. It is an honorable pursuit and one of the most
challenging tasks I have ever undertaken. I and my judging compatriots
have dedicated much time, effort, and toil to this endeavor.

There are three basic eras of judging performance. Remarkably, General
Effect has survived essentially unchanged since the earliest days of
judging, despite being shrouded in continual controversy. Trends in
performance judging can be defined as follows:

I- The Age of Ticks (Post WWII-1969)
II- The Era of Transition (1970-1983)
III- The Subjective Evaluation of Performance (1984-present)

I- The Age of Ticks

In the beginning, there was the tick. The tick was good. The tick
determined performance scores for Brass, Drums, and Marching &
Maneuvering, by deducting one-tenth of a point for each noted
transgression. Execution required two judges for each caption, both
positioned on the field. Each judge was assigned a side, one or two, to
start the contest, then switched at the approximate midpoint of the
contest. The judges' scores were then averaged to determine the final
Brass, Drum, and M&M caption scores.

The captions were titled, appropriately, Execution, and judges frequently
wore black and blue. The commencement of the first note or step brought
forth a pistol shot by the Timing & Penalties Judge, beginning eleven and
one-half minutes of scrutiny by the panel. At 11:30 a second pistol shot
was fired to end the performance judging. The Timing & Penalties judge
also deducted points for boundary violations, flag code violations, and
the infamous dropped equipment. A tabulator then tallied the sheets,
counting each tick mark and subtracting the number of errors from a
perfect score. The scoring system usually employed was that of the
American Legion and points were allocated as follows:

Brass Execution 25 points
M&M Execution 25 points
Percussion Execution 20 points
General Effect Brass 10 points
General Effect M&M 10 points
General Effect Percussion 10 points
Total 100 points
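The tabulator's arithmetic under this system was simple deduction. Here is a
hypothetical sketch of it: the caption maxima come from the allocation above,
but the function name and the tick counts are invented for illustration.

```python
# Sketch of American Legion-style tick tabulation. The caption maxima come
# from the allocation above; everything else here is invented illustration.

CAPTION_MAX = {
    "Brass Execution": 25.0,
    "M&M Execution": 25.0,
    "Percussion Execution": 20.0,
}

def caption_score(max_points, ticks_judge_one, ticks_judge_two):
    """Average the two field judges' tick counts and deduct 0.1 per tick
    from the caption's perfect score."""
    average_ticks = (ticks_judge_one + ticks_judge_two) / 2
    return max_points - 0.1 * average_ticks

# Hypothetical example: 14 and 18 ticks from the two brass judges.
brass = caption_score(CAPTION_MAX["Brass Execution"], 14, 18)
print(brass)  # 25.0 - 0.1 * 16 = 23.4
```

The General Effect captions, by contrast, were awarded subjectively and simply
added on top of the execution scores.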

The VFW rules, employed typically at VFW State and National
Championships only, averaged the three GE scores for a 10 point total,
added ten points for inspection and ten points for cadence to total 100
points. This is why scores of 87.333 were listed for a VFW show. The
inspection was held before the contest and tenths were deducted for hair
touching the collar, dirty shoes, watermarks on the horns, tarnished
cymbals, and other assorted infractions. A corps could literally lose
the show before it started. Cadence was also a penalty/deduction
caption for falling outside required tempos.
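The repeating-thirds scores follow directly from averaging the three GE
captions. A sketch with invented subscores (the structure of the total is from
the text; none of these numbers are from an actual contest):

```python
# Invented example of a VFW-style total. The structure (three execution
# captions, averaged GE, inspection, cadence) is from the text; the
# subscores themselves are made up.

brass_exec, mm_exec, perc_exec = 22.4, 21.9, 17.9   # after tick deductions
ge_brass, ge_mm, ge_perc = 8.5, 8.0, 7.9            # three 10-point GE captions
inspection, cadence = 9.0, 10.0                     # deduction captions

ge_average = (ge_brass + ge_mm + ge_perc) / 3       # 24.4 / 3 = 8.1333...
total = brass_exec + mm_exec + perc_exec + ge_average + inspection + cadence
print(round(total, 3))  # 89.333 -- the telltale repeating third
```

Whenever the three GE scores summed to a value not evenly divisible by three,
the division left a repeating decimal, hence listed totals like the 87.333
mentioned above.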

To judge Execution, several concepts were important. The first was a
judge's tolerance. Tolerance was defined as the degree of error deemed
serious enough to be considered a tick. How long did someone need to
hold a note past the release point before it was considered a tick? How
far out of line could someone venture before incurring the wrath of the
judge? How early did a snare attack have to be before we ticked it?
Judges made these decisions on the first corps and then maintained that
tolerance throughout the contest. The tolerance was frequently set on
the worst corps in the contest, who came on first. It should be noted
that judges ticked only the most severe, public errors. From my
experience, they "ticked" between 25-33% of what they heard or saw as
errors. The key was to be able to note whether the deviation fell
beyond your tolerance (or intolerance) for that given evening.

A second concept was sampling. Identifying weaker performers was easy;
you could follow them about the field and wear your pencil out. A judge
needed to make sure that they sampled each section and performer on a
near equal basis to ensure that the resulting number of ticks would be
an equitable representation of that corps' performance. They also
applied sampling to observation of the areas listed on the sheet to
deposit one's ticks. The brass sheet was divided into three main areas,
Method, Timing, and Ensemble. Under the method area, attacks,
articulation, breath control, and releases were some listed categories.
Timing featured attacks, rhythm, releases, and so on, while the ensemble
area was a place to designate major problems that involved more than one
individual, i.e. not playing together. The judge keyed each tick by
using a code to identify the instrument (snare, soprano, flag), then put
a circle around it and a line leading off the ticking area to write a
comment to help the corps identify the who, where, when, what, why, and
how of the tick. Writing the comment also slowed the judge down, which
helped prevent a corps from receiving excessively low scores.

The system of tick judging had many advantages for drum corps. First, it
was easily understood by all. A judge raised his clipboard and everyone knew
what that meant. Audience members could hear the ticks and then see if
the judge hit it. Anyone could get an idea of how the scoring would be
for that given evening by watching the judges. Performers and
instructors also had a tangible set of criteria to work from. Cleaning
the show was a summer long activity and performers got immediate
reinforcement, both positive and negative, when on the field of
competition. Clean drum solos were legendary. Cleanliness was next to
Godliness.

Judging execution was a valuable learning experience for judges. When
starting, new judges had to learn to tick. It was the backbone of the
system. It is all I did for two years. What one learned was the art of
critical listening or observation. Hours were spent in total silence
looking and listening, making constant, continual judgements about
performance. It was a Zen-like state, totally immersed in the sound,
simply listening to everything and nothing. One did not say, "Well now,
I will listen to rhythm," one simply got in the zone and experienced the
performance. When a judge was on, one did not think, "Was that a tick?"
The pencil flew to the sheet instantaneously as the event occurred.

Much of the disagreement between "old-timers" and the newer generation
of corps people regarding the excellence of the activity is based upon
the different ways each generation experiences the show. The tick
generation still sees and listens with a fine discrimination that is the
product of years of very critical listening and observation. This skill
served me well as a professional trumpet player and is employed every
time I hear music. The post-tick 1984 crowd simply does not have the
point of reference.

Yes, the tick was good.

II- The Era of Transition

A multitude of changes in the judging system marked this period
1970-1983 as we moved away from collecting errors to a build-up system
of scoring. While it was never the intended outcome of these changes,
one can see this was an inevitable progression. What drove the changes
away from the tick and toward subjective performance scoring was the
remarkable accomplishments of our drum corps. It is my contention that
judging systems changed to reflect what drum corps had already
accomplished. We have generally accepted as truth the notion that judges
and judging affect the state of the activity, but it works in reverse.

Content Analysis

In 1970, the first subjective performance caption appeared. It was named
Content Analysis and it was worth five points on the Brass Execution
sheet. The caption was instituted so as not to unfairly penalize corps
who played demanding music and thereby exposed themselves to a greater
likelihood of being ticked. The real reason for this was the G-F bugle
which, legalized in 1968, allowed corps to explore far more musically
than was previously possible on G-D horns. It should be noted that all the
major advances in judging system changes occurred first in brass, then
later were adopted by percussion and visual. New ground was being
broken and the system reacted to reward it. Those in the lead reap the
rewards. This notion of compensation for difficulty went through many
transformations. It appeared later on the Analysis sheets as the
Demand/Exposure subcaptions in the early days of DCI and was, and still
is, the source of much debate.

Music Analysis, Percussion Analysis, and Visual Analysis

In 1972 or 1973, we transformed the Content Analysis subcaption into the
Music Analysis caption, the first addition of a new judge since the
post-WWII era of drum corps began. Now performance judging went beyond the
tick. The initial sheet awarded 4.0 for Tone Quality and Intonation, 3.0
for Musicianship, and 3.0 for Content (later dubbed Demand/Exposure), for
a total of 10 points. This sheet survived until 1982 and became the
basis for Field Brass and Ensemble Brass in the nine judge system of
1984 that eliminated the tick. Here we were for the first time, talking
about qualities of performance that the tick could not evaluate.

Why? What motivated this change? The answer was simple. Many corps were
spending time tuning their new G-F bugles. There were vast differences
in the quality of corps' sound. The system changed to reward what had
already been accomplished. Corps escaped the key limitations of the G-D
bugle and began to play more sophisticated programs. The system of
execution only rewarded uniformity, not quality. The G-F Olds Ultra Tone
bugle was, in its day, the finest bugle on the market (and finally
comparable to a student model brass instrument). Major differences could
be heard between brass lines. Listen to Sandra Opie's Argonne Rebels
brass lines of the early 70's. The rules of the game were changing to
keep up with the leaders in the activity.

The changes in brass judging did not go unnoticed. First, Percussion
Analysis and then Visual Analysis appeared, raising the judging panel
to 14, including the Timing and Penalties Judge and Tabulator. More
significantly, equipment additions to percussion in the form of bells,
xylophone, and tympani required the same qualitative approach as did the
Brass Caption's Music Analysis sheet. The word Visual appeared for the
first time with major rules changes eliminating the starting line,
finishing line, boundary lines, grounding equipment, and the required
color presentation. This created the same type of explosion seen in the
musical end of drum corps. The shackles were removed and the activity
went through major advances. The rules of the game were changed to
reflect these new trends in design and performance.

After the appearance of the analysis caption, changes occurred in
execution judging. Many errors were not being ticked, yet were heard or
seen by the audience and many ticks on the field were not seen or heard
by the audience. One field execution judge moved from the field to the
press box for what was known as Ensemble Execution for all three
disciplines of judging. The Ensemble Judge became responsible for the
contest that the audience saw and heard.

Changes in design and structure made this a necessary move. Corps had
become larger and program design more complex. The field M&M judge had
difficulty knowing what the drill intended as to interval, shapes of
forms, and so on. One could better understand the visual package from
the press box and evaluate the execution from that level. The straight
lines and three man squad moves of the past had gone the way of the
dinosaur. Symmetry ruled the land.

Musically corps were fielding up to seventy brass and thirty-five
percussionists. The requirements of increasingly complex programs, along
with the larger corps, caused more ensemble-related problems. Playing
together and balance became important given the greater field coverage,
changing tempos, and a desire to march at faster tempos. Drum lines
began to march, the pit appeared, and moving around the field judging
required some speed, agility, and daring. The world was changing.

In 1981, the death knell for ticks appeared. Execution judges began to
use cassette tapes to augment the written sheet. We knew it as
"tick-talk." Cost constraints had eliminated the tabulator and the
timing and penalties judge. Execution judges went forth equipped with a
device that would beep after eleven minutes and thirty seconds had
elapsed, a tape recorder, a headset and microphone, a clipboard, and a
pencil to cart around the field.

The tape was, in my opinion, the final nail in the coffin. When
instructors got to hear the process of ticking, judges talking about all
the deviations and then which were ticks and which were not, the myths
about execution began to fall. The belief that as Pepe Notaro once said,
"A tick, is a tick, is a tick!" was shown to be not entirely true, with
all deference to Pepe, my drum corps hero. A tick was always an error.
However, the increased awareness that not all errors were ticks led to
questioning the continued application of this approach. The grey area
was immense. In addition, the increased complexity of design made it
difficult to judge uniformity. Intervals were not always equally spaced.
Form was often difficult to assess at close distance. Timing on the
field became an issue with the field spreads, sound pockets, and corps
performing multiple tempos and polyrhythmic music. What sounded like a
tick might, in fact, not be.

Problems with tabulation also were an issue. DCI had eliminated
tabulators and the scoring was done by volunteers provided by show
sponsors. Frequently scoring mistakes occurred and the first thing many
corps did was to check their sheets and recalculate their numbers.
Ticking had gotten cumbersome and the corps had grown to higher levels.
The act of waiting to catch 10-15 ticks on a top corps was seen as a
less reliable determinant of outcome. For brass, it was a non-musical measure
of performance judging. When drum corps was populated with few formally
educated performers, instructors, and judges, it worked well. By 1982,
it was uncommon to find judges without music degrees and extensive
non-drum corps musical backgrounds. The tick was dead.

The brass instructors voted to eliminate ticks for the 1983 season. The
result was satisfactory. In 1984, with the nine judge system being
implemented, the tick had vanished, along with the cost of housing,
transporting, and paying three judges. Many will argue that this system
of combining objectivity (tick) and subjectivity was the best judging
concept. However, the changing nature of the activity had made the tick
an increasingly less viable indicator of performance. The lessened
viability for brass and visual was significant. The shackles
were off and corps were free to challenge their performers and audience.
Many of our percussion brethren still felt the tick worked well for
them, but were unable to muster the support to resist change.

The need to align the judging system for brass, percussion, and visual
is always a focus of systems design. While it may appear organized and
logical, the music and movement aspects have little in common, except
for occurring simultaneously.

III- The Subjective Evaluation of Performance (1984-present)

The elimination of tick judging coincided with the development of
criteria referencing and the delineated scale to determine score. Past
methods of scoring divided each scoring range into five equal areas. A
ten point scale was divided by percentage: 0-20 Poor, 20-40 Fair, 40-60
Good, 60-80 Excellent, 80-100 Superior. Each judge had to decide what was poor,
fair, good, excellent, superior, basing this on their practical
experience and individual criteria. This led to widespread fluctuations
in scoring between DCI and local contests. Corps would often score 20-25
points lower at a DCI contest. Uniformity of score was a problem.

In 1983, Brass caption judges had specific criteria to define each
scoring range, thus making a particular set of performance qualities
worth a set numerical value. Therefore, judges across the land could
reasonably be assured that a 7.1 in New York was a 7.1 elsewhere. In
reality, there was more of a regional bias, with scores being
standardized for DCE, DCM, and DCW contests. In the early season, each
region had no idea what level of performance was really achieving the
7.1 in other parts of the country, but there was regional uniformity.
The criteria are a guideline for judges, approved by the corps, to
determine what factors are weighed in determining the outcome of
contests. These words could be discussed, argued, and debated, giving
substance to subjectivity.

The current DCI Brass Performance sheet, Musicianship subcaption, has
descriptors for a score in the range of 25-34 in a 50 point subcaption.
It is defined as follows:

"The players usually achieve meaningful and uniform communication. A
generally successful attempt at dynamic shading and contouring is
audible. An occasionally mechanical approach to expression exists.
Lapses in uniformity of style, idiom, and taste appear for short periods
of time. A sometimes rigid approach to interpretation is present.
Demands requiring above average musical understanding are present
throughout most of the performance. Musical demands of a high degree are
sometimes present."

A performance exactly meeting these descriptors would score a 30. The
definition is the mid-point of the scoring range. There are specific
criteria for all six scoring ranges for each subcaption on every sheet
in use today.
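A minimal sketch of how a delineated scale turns a box decision into a number.
Only the 25-34 Musicianship box and the six-range, 50-point structure come
from the text; the other boundaries and the helper function are invented for
illustration.

```python
# Hypothetical delineated-scale sketch. Only the 25-34 box is from the
# text; the other five boundaries are invented for illustration.

BOXES = {
    "Box 1": (0, 14),
    "Box 2": (15, 24),
    "Box 3": (25, 34),   # the Musicianship descriptors quoted above
    "Box 4": (35, 42),
    "Box 5": (43, 47),
    "Box 6": (48, 50),
}

def score(box, fit):
    """fit is 0.0 at the bottom of the box, 1.0 at the top; a performance
    exactly matching the descriptors sits at fit = 0.5, the midpoint."""
    low, high = BOXES[box]
    return low + fit * (high - low)

print(score("Box 3", 0.5))  # 29.5 -- read in practice as the 30 cited above
```

The judge's real work is the first argument: deciding which box the
performance belongs in; the number then follows from where it sits within
that box.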

The rationale for delineated ranges for scores is that we are more
likely to agree that a corps falls into a certain box, than to "know"
what good musicianship means for all drum and bugle corps. The use of
criteria allows judges to score contests with a similar view and focus,
rather than the past self-derived meaning of poor, fair, good,
excellent, and superior. In this manner, judges' personal likes,
dislikes, preferences, and idiosyncrasies can be minimized and a common
basis for examination can be established. The standards set forth have
helped fuel the growth of musical quality.

The criteria also blend the concept of demand with performance. The
rewards for good performance are compounded when the demands of the
program are high. Performing difficult material exceedingly well is
given the highest score. The demonstration of demand is best done
through performance excellence. Teaching a bugle line to play with
great tone, be reasonably well in tune, perform musically all while
dashing around a football field is not easy. Anyone can play and march
difficult material poorly. It is not hard at all.

The combining of content and performance is today known as achievement.
This is the most basic concept in contemporary judging. We examine both
what the performer is doing and how well it is done. Achievement is what
determines scoring in each subcaption and corps placement. Judging tapes
should provide recognition of what the performers are doing and how well
it is done. A judge might comment, "That is a difficult move with the
sopranos moving quickly back in that file, while tonguing sixteenth
notes in the upper register. You have some work to do there as their
sound is harsh and they are not together." This captures both the "what"
and "how" and provides a judge and corps with information regarding
which scoring box describes the performance.

It should be noted that while the tick is no longer used, judges
withhold credit for poor performances rather than taking it away, and
give credit for good performances rather than simply refraining from
deduction. The
glass is half-full, glass is half-empty philosophical debate is
appropriate here.

The judging system has gone through several changes since 1984. Nine
judges were employed from 1984-1987 and 1990 to 1993. Brass, Percussion,
and Visual each had a field judge, an ensemble judge and an effect
judge. In 1988 and 1989, only 6 judges were employed for financial
reasons. The field judge was not used for these two years.

In 1994 a major shift occurred as the panel was reduced to seven. We
combined Brass General Effect and Percussion General Effect into Music
Effect as well as Brass Ensemble and Percussion Ensemble into a Music
Ensemble caption. After all these years, one person finally had the
responsibility of judging all the music. Oddly enough, we all listen to
all the music, but judges had to learn not to listen to the other
section. This change in judging methods again was the result of corps
spending much time in presenting total music packages. The pattern of
new standards being recognized and adjudication following them
continues. No longer are there two separate musical contests.
Remarkably, this change occurred
with little controversy and has not been the subject of much discussion.
Yet it is probably more of a radical shift than the loss of the tick.

This should give one an understanding of both how the judging system
has changed over time and the underlying factors necessitating those
changes. Hopefully it will dispel many of the assumptions about judges
and the scoring system. The honor of judging for drum corps carries a
great responsibility and is taken quite seriously. Judges work hard to
understand the activity, its participants, and the scoring system.
While contest outcomes are not always popular, they are based upon the
standards implemented by the corps themselves. Judges work hard to view
performances in the context of their assigned captions.
