NRC Rankings give negative weight for more women?

ChristinaSormani

Dec 4, 2010, 12:13:29 AM
to WomeninMath
The following two articles indicate that the NRC rankings gave
negative weights to departments with more women:

http://pubs.acs.org/cen/coverstory/88/8843cover.html

http://leiterreports.typepad.com/blog/2010/10/more-peculiarities-of-the-nrc-rankings-the-more-women-and-minorities-the-worse-the-program.html

Has anyone investigated this? I have emailed the AAUW to see what they
say. Maybe this is not the case in all fields. Is it true in math?

m...@math.utexas.edu

Dec 4, 2010, 4:04:40 PM
to women...@googlegroups.com, WomeninMath
I would be cautious about making any strong interpretation of these results.
I am not entirely sure of the procedure used based just on the articles
provided, but if indeed (as suggested in the second article) a regression
analysis was done on a subset of schools, and the regression obtained
from them was then used to calculate the rankings for the entire set of
schools, all the uncertainties and iffiness of regression come into play.

In particular, in a situation like this, the predictor variables are often
correlated, and a regression analysis cannot be interpreted as "the effect
of this variable". In other words: The coefficient of one term can (at
best) be interpreted as "the effect of this term, keeping all the others
constant". But given the correlations that naturally occur, "changing one
term while keeping all the others constant" is not possible. The result is
that correlated terms may have coefficients that are misleading -- e.g.,
something correlated with percent of women may have an inflated
coefficient while the coefficient of percent of women may be lowered -- or
it could have worked out vice-versa.
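As a toy illustration of this point (entirely made-up data, not the NRC's
analysis), a short simulation shows how a variable with no causal effect on
the outcome can still pick up a negative coefficient purely through its
correlation with a genuine driver:

```python
# Toy illustration (hypothetical data, not the NRC's): when predictors are
# correlated, a regression coefficient can turn negative even though the
# variable has no causal effect on the outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A latent "program quality" drives everything.
quality = rng.normal(size=n)

# Publications track quality; percent-women is (in this toy world)
# negatively correlated with quality for purely historical reasons.
pubs = quality + 0.5 * rng.normal(size=n)
pct_women = -0.5 * quality + 0.5 * rng.normal(size=n)

# The outcome (a ranking score) depends ONLY on quality.
score = quality + 0.3 * rng.normal(size=n)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), pubs, pct_women])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print(beta)  # the coefficient on pct_women comes out negative
```

Here pct_women does nothing causally, yet its fitted coefficient is negative,
because, given publications, it still carries (negative) information about the
latent quality.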

Thus, statements such as "increasing the number of women decreases the
ranking of the department" are at best misleading.

I taught a graduate course in regression for about ten years and am very
aware that regression (perhaps even more than other statistical
techniques) is very often taken to say more than it can possibly say --
and often done sloppily as well.

In fact, as a mathematician teaching statistics, I became so aware of the
misuses and misinterpretations (rarely deliberate) of statistics that
when I retired I started a website on Common Mistakes in Using Statistics
(http://www.ma.utexas.edu/users/mks/statmistakes/StatisticsMistakes.html;
for mistakes involving regression coefficients, see
http://www.ma.utexas.edu/users/mks/statmistakes/regressioncoeffs.html)
and began giving yearly workshops on the topic.

Martha Smith



ChristinaSormani

Dec 5, 2010, 11:24:33 AM
to WomeninMath

> In particular, in a situation like this, the predictor variables are often
> correlated, and a regression analysis cannot be interpreted as "the effect
> of this variable". In other words: The coefficient of one term can (at
> best) be interpreted as "the effect of this term, keeping all the others
> constant". But given the correlations that naturally occur, "changing one
> term while keeping all the others constant" is not possible. The result is
> that correlated terms may have coefficients that are misleading -- e.g.,
> something correlated with percent of women may have an inflated
> coefficient while the coefficient of percent of women may be lowered -- or
> it could have worked out vice-versa.

I agree that departments with women mathematicians who have funding and
awards and excellent records all around will benefit in the rankings despite
the negative coefficient on the one term involving percent of women on the
faculty. But one must examine the repercussions of a negative coefficient
like this.

Suppose a small department decides to do some hiring based entirely on
improving its rankings. A single well-chosen hire in a small department
could substantially boost the department's ranking, and it would be easy to
request a line for such a purpose from the administration. This department
could create a spreadsheet and judge all candidates purely on whether the
data each person adds to the department would improve its rating.

Certainly, if there is a negative coefficient for having women faculty,
this would put women applicants at a disadvantage. One would have to
question the legality of such a hiring process.

If not for this coefficient, hiring based on improving rankings would
perhaps have been one of the first truly gender-blind systems of hiring
put into place.
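As a sketch of the spreadsheet scenario above (the factor names and weights
here are invented for illustration; they are not the NRC's actual
coefficients):

```python
# Hypothetical weighted-sum scoring of candidates, as in the spreadsheet
# scenario above. Factor names and weights are invented for illustration.
weights = {
    "pubs_per_year": 0.40,
    "has_grant": 0.30,
    "pct_female_change": -0.15,  # the problematic negative coefficient
}

def score(candidate):
    """Predicted change in the department's rating from this hire."""
    return sum(weights[k] * candidate[k] for k in weights)

# Two candidates identical except for the effect on percent-female faculty.
a = {"pubs_per_year": 3.0, "has_grant": 1.0, "pct_female_change": 0.0}
b = {"pubs_per_year": 3.0, "has_grant": 1.0, "pct_female_change": 1.0}

print(score(a), score(b))  # b scores lower solely due to the negative weight
```

With any negative weight on percent female faculty, two otherwise identical
candidates get different scores, which is the legal concern raised above.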

ChristinaSormani

Dec 5, 2010, 11:53:23 AM
to WomeninMath
You can also download the complete data and analysis used by the NRC:

http://www.nap.edu/rdp/#download

The rankings are based on 20 factors:

-- Publications per allocated faculty member (past five years)
-- Citations per publication (of papers in past five years)
-- Percent faculty with grants (in past five years)
-- Awards per allocated faculty member
-- Percent interdisciplinary faculty
-- Percent non-Asian minority faculty
-- Percent female faculty
-- Average GRE scores
-- Percent 1st-yr. students with full support
-- Percent 1st-yr. students with external funding
-- Percent non-Asian minority students
-- Percent female students
-- Percent international students
-- Average PhDs, 2002 to 2006
-- Average completion percentage
-- Median time to degree
-- Percent students with academic plans
-- Student work space
-- Student health insurance
-- Number of student activities offered

The weight given to each of these factors was based upon a regression
analysis of rankings that a group of mathematicians had given to subsets
of the complete list of math departments.

The negative weight given to the percent of women could have been a
consequence of (unconscious) sexism on the part of the mathematicians who
were given the data and told to judge the departments. Or it could have
been the result of a correlation: a consequence of more women being in
lower-ranked departments (perhaps due to sexism of higher-ranked
departments in the past). Or it could have been the "fault" of the
women: I had a child during that particular five-year period and so could
have brought my department down due to fewer publications during those
five years (although in fact I beat my department's average
regardless). In my department, all the women beat the department
average.

The NRC data is available as a spreadsheet, so one could do a study to
see whether there was a correlation in the data that caused this effect,
or whether it was purely a consequence of the external rankings given by
the particular group of mathematicians studied for the regression analysis.

It should be noted that at www.phds.org one can access the data in a
user-friendly way, selecting one's own weights for different aspects of
a program and pulling up one's own rankings based on the NRC-collected
data. There one may specifically request a department with more women.

m...@math.utexas.edu

Dec 6, 2010, 7:52:47 PM
to women...@googlegroups.com, WomeninMath
Thanks for giving the website for downloading the data and more
information about how the analysis was done.

I've briefly read the account of the analysis. One thing that stands out
as iffy to me is that they use a stepwise procedure, based on
t-statistics, to arrive at the weights/regression coefficients. The major
problem with this is that it uses multiple inference without adjusting for
the number of t-tests performed. My impression is that this is still often
done in the social sciences (despite the fact that it is not justified by
the logic of hypothesis testing and that simulations show that it can lead
to misleading results), although it seems to be used less and less
frequently in the sciences, at least in biology (I'm not sure about
engineering). However, this dubious practice may be partly or largely
mitigated by the fact that they use a nonparametric method of calculating
confidence intervals (which they call Ninety Percent Ranges) of the
rankings.
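To put a number on the multiple-inference worry: with 20 candidate
predictors each tested at the 5% level, and assuming for simplicity that
the tests are independent, the chance of at least one spurious
"significant" coefficient is already well over one half:

```python
# Familywise error rate for 20 independent tests at alpha = 0.05,
# versus the Bonferroni-adjusted per-test level.
alpha, k = 0.05, 20

familywise = 1 - (1 - alpha) ** k
bonferroni = alpha / k

print(round(familywise, 3))   # ≈ 0.642
print(round(bonferroni, 4))   # 0.0025
```

A stepwise procedure that keeps whichever of the 20 terms looks significant
is therefore very likely to keep at least one term that means nothing.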

The bottom line in any event: it's important to get away from thinking in
terms of a single number for a ranking, and instead to think in terms of a
ranking range, reflecting the inherent uncertainty in giving rankings.
(For example, my department's ranking should be considered as "in the
range from roughly 9 to 30" for the regression ranking, and "in the range
from roughly 12 to 32" for the survey ranking.) A lot of people don't like
this, but it is much more honest and realistic than giving a single-number
ranking.
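A minimal sketch of reasoning with ranking ranges rather than single numbers
(the first two ranges follow the example in this post; the third is a
hypothetical competitor, not from the NRC data):

```python
# Sketch: with interval rankings, two departments whose ranges overlap
# cannot honestly be ordered against each other.
def overlaps(a, b):
    """True if closed intervals a and b share at least one rank."""
    return a[0] <= b[1] and b[0] <= a[1]

regression_range = (9, 30)   # "roughly 9 to 30"
survey_range = (12, 32)      # "roughly 12 to 32"
other_dept = (25, 45)        # hypothetical competitor's range

print(overlaps(regression_range, other_dept))  # True: not distinguishable
print(overlaps(survey_range, other_dept))      # True
```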

A corollary: Interpreting a specific weight/regression coefficient doesn't
make a whole lot of sense in the broader context.

Martha

Beata Randrianantoanina

Jan 2, 2011, 7:41:27 PM
to women...@googlegroups.com
Have you seen this study?

http://www.futurity.org/top-stories/do-recommendations-cost-women-jobs/

Happy New Year,

Beata Randrianantoanina
