VIF, definitely -- not pairwise correlations. From an awesome forthcoming 3rd edition of a stats book I happen to know of:
How does one detect multicollinearity? A first step can be looking at pairwise correlations of predictors, but that’s not even close to enough, as I have written in many a review: high pairwise correlations between predictors are a sufficient condition for multicollinearity, but not a necessary one -- multicollinearity can be present even when every pairwise correlation looks harmless. Thus, it is not *ever* enough, period. A better diagnostic is a set of statistics called variance inflation factors (VIFs). [... example, how to compute them from parts of a model, blah ...] Now, what VIFs measure is [...] that’s multicollinearity. And this should explain to you why doing only pairwise correlations as a multicollinearity diagnostic is nothing short of futile. Briefly and polemically: where’s the *multi* in pairwise correlations?

More usefully: Imagine you have a model with 10 numeric predictors. Then the pairwise-correlation tester checks whether predictor 1 is collinear by correlating it with the 9 other predictors: 1 & 2, 1 & 3, ..., 1 & 10. But maybe predictor 1 isn’t predictable by any one other predictor, yet is predictable by the combination of predictors 2, 4, 5, 8, and 9? Or maybe a single level of a categorical predictor is highly collinear with something else, which you would miss by correlating that categorical predictor as a whole, all its levels lumped together, rather than its individual dummy-coded levels. The pairwise approach alone really doesn’t do much: if you’re worried about collinearity, great, I applaud that! But if you then only check for it with pairwise correlations, consider your study an automatic revise-and-resubmit, because then, by definition, the reader won’t know how reliable your model is
[...]
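To make the quoted thought experiment concrete, here's a minimal sketch (Python with numpy; the data are simulated purely for illustration). Predictor 1 is built from the combination of five other predictors, so each pairwise correlation comes out moderate (around 0.44), while predictor 1's VIF lands around 20, far beyond the usual rule-of-thumb cutoff of 10:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000

    # Five independent "other" predictors.
    X = rng.normal(size=(n, 5))

    # Predictor 1 is driven by the COMBINATION of all five, plus noise:
    # no single pairwise correlation will look alarming.
    x1 = X.sum(axis=1) + rng.normal(scale=0.5, size=n)

    # Pairwise correlations of predictor 1 with each other predictor:
    # all around 0.44 -- nothing an |r| > .8 rule of thumb would flag.
    corr = np.corrcoef(np.column_stack([x1, X]), rowvar=False)
    print(corr[0, 1:].round(2))

    # VIF of predictor 1: regress it on ALL the other predictors,
    # take that model's R^2, and compute 1 / (1 - R^2).
    design = np.column_stack([np.ones(n), X])      # intercept + the others
    beta, *_ = np.linalg.lstsq(design, x1, rcond=None)
    r2 = 1 - np.var(x1 - design @ beta) / np.var(x1)
    print(round(1 / (1 - r2), 1))                  # roughly 20: trouble

(In practice you wouldn't hand-roll this; statsmodels, for instance, ships variance_inflation_factor, which does the same regress-and-invert computation per column of the design matrix. Either way, the VIF sees the joint predictability that no pairwise correlation can.)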
In your case, Earl, the fact that the correlations are significant doesn't matter: a correlation's p-value is driven partly by the sample size, whereas what counts for the VIF is how much of each predictor's variance is accounted for by all the other predictors jointly -- a matter of magnitude, not significance.
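And with exactly two predictors, each one's VIF reduces to 1 / (1 - r^2), which makes the significance point easy to see in numbers. A toy sketch (Python again, simulated data): at n = 10,000 even a weak correlation of about 0.1 is wildly significant, yet the implied VIF is about 1.01, i.e., essentially no variance inflation at all:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n = 10_000

    # Two predictors with a true correlation of roughly 0.1.
    x = rng.normal(size=n)
    y = 0.1 * x + rng.normal(size=n)

    r, p = stats.pearsonr(x, y)
    print(round(r, 2), p)            # r ~ 0.10, p astronomically small
    print(round(1 / (1 - r**2), 2))  # VIF ~ 1.01: no inflation to speak of

Significance tells you the correlation is probably not zero; the VIF tells you whether it, together with everything else, is big enough to destabilize your coefficient estimates.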