By popular consensus, student evaluations are said to be an accurate
reflection of teaching ability. I would like input on whether this
holds water when it comes to mathematics service courses, in
particular for instructors and adjuncts. References and published
case studies would be great.
"accurate indicators" of what, exactly?
Try these handy, time-tested, kill-your-ranking tricks:
1. Be different from your students (a foreign accent helps immensely).
2. Teach an unpopular course (e.g. something hard like mathematics,
to an audience taking it "because it's a requirement").
3. Pick the right section to teach. Example: the 8am calc-1 is filled
   with the bright freshmen; the 1pm class is filled with the juniors
   who got to register first. Now, why are they only getting
   around to calc 1 two years after arriving on campus?
4. Have the evaluations filled out right after a difficult exam is
   returned.
5. Insist that the evaluations be done using part of the time
allotted for something more popular (e.g. a pre-exam review).
Or, you might prefer to try the inverse tricks instead :-)
Most of us would argue that there are predictable differences in
evaluations which have nothing to do with teaching quality. On the
other hand, there is not always agreement on what 'teaching quality'
means. Consider these tricks, too:
1. Harangue the students repeatedly about how this material is
   necessary for ... success in the next course.
2. Maintain 'professional detachment' so that your primary interaction
with the students is only to provide well-polished lectures and
accurately graded exams.
These will also undercut your popularity, and presumably also your
student evaluations. Unlike the previous set of tricks, though, these
are examples of cases in which popularity is, I think, positively
correlated with being a good teacher. Some will disagree. But in my
experience, students get much more out of a class which they
enjoy. A teacher's "popularity" is usually quite well correlated
with the student's level of happiness in a course. Yes, it may be
that the students are happy only because the instructor tells corny
jokes, and one might object that this is not "good teaching". On the
other hand, if that makes the students actually come to class and
stay awake, is this not an improvement over the "drill-'em-and-kill-'em"
approach or the "sage-on-the-stage" approach?
So I guess my attitude is: "accurate indicators or popularity contests"
might be a false dichotomy.
>By popular consensus, student evaluations are said to be an accurate
>reflection of teaching ability.
Not at any college I've ever taught at!
(By the way, what is the difference between a "popular" consensus and
a regular consensus?)
Unpopular consensus: "As you all know, we need to make drastic budget
cuts this year..."
The Scarlet Manuka
One of the best studies was published in _Science_, I believe
in September 1977, on teaching assistants in calculus at UCSD.
The correlation between class performance and rating of the
TA was -0.74, using the grades in the first two quarters as
additional predictor variables.
There were several other articles in _Science_ on the general
topic, with none coming up with the conclusion that they are
at all accurate.
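The analysis described above, correlating TA ratings with class performance while using earlier grades as additional predictor variables, amounts to a partial correlation. The sketch below illustrates the idea on synthetic data (the variable names, effect sizes, and data are all made up for illustration; they are not from the cited study): regress the control variables out of both quantities, then correlate the residuals.

```python
import numpy as np

def partial_corr(x, y, Z):
    """Pearson correlation between x and y after regressing the
    control variables (columns of Z) out of both."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Design matrix: intercept plus control columns.
    D = np.column_stack([np.ones(len(x)), Z])
    rx = x - D @ np.linalg.lstsq(D, x, rcond=None)[0]
    ry = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Synthetic illustration (assumed structure, not real data):
# prior grades predict later performance, and the rating is
# negatively tied to performance, as in the reported -0.74.
rng = np.random.default_rng(0)
n = 200
prior = rng.normal(size=n)                  # first-two-quarter grades
perf = 0.6 * prior + rng.normal(size=n)     # later class performance
rating = -0.5 * perf + rng.normal(size=n)   # TA rating

r = partial_corr(rating, perf, prior.reshape(-1, 1))
print(f"partial correlation controlling for prior grades: {r:.3f}")
```

With these synthetic parameters the partial correlation comes out negative, mirroring the sign (though not the magnitude) of the reported result.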
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
hru...@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558
It is not by popular consensus, but it is very hard to
get an administrator to consider that it is not so.
> It is not by popular consensus, but it is very hard to
> get an administrator to consider that it is not so.
A report at Ohio State says that evaluations gauge "student
satisfaction" and nothing else.