Depends on whether you look at it intrinsically or extrinsically.
Extrinsically, it's a regression.
From an intrinsic perspective:
If it's the same reason, then it's a regression.
If it's a new reason, it's a new problem that happens to be triggered by
the same code sample.
I wouldn't place too much weight on definitions though.
The most important point here is: What decision gives the best
effort-to-effect ratio?
Personally, I do not like slow tests. Practice has shown that they tend
to be ignored, and they aren't even always run before a merge. So either
the effort is high, or the effect is low.
I'd actually like to see slow tests discouraged, as a policy decision.
I think slow tests are acceptable where the test is really important, or
part of some in-progress development effort where performance is
expected to go up.
For regressions, they're simply not worth it, at least in my eyes.
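One common compromise, short of discouraging slow tests outright, is to gate them behind an explicit opt-in so the default run stays fast. Here is a minimal sketch of that idea in plain Python; the `slow` decorator and the `RUN_SLOW_TESTS` variable are illustrative only (pytest markers and SymPy's own `@slow` decorator implement the same concept via collection-time filtering rather than a wrapper):

```python
import os

def slow(test_func):
    """Mark a test as slow: it is skipped unless RUN_SLOW_TESTS is set.

    Illustrative sketch only -- a real test runner would report the skip
    instead of returning a string.
    """
    def wrapper(*args, **kwargs):
        if not os.environ.get("RUN_SLOW_TESTS"):
            return "skipped"  # default run: don't pay for the slow test
        return test_func(*args, **kwargs)
    return wrapper

@slow
def test_expensive_regression():
    # Stand-in for a check that takes minutes to run.
    return "ran"
```

The everyday run then skips these by default, while a periodic CI job can set the variable to exercise them, which keeps the effort of the normal pre-merge run low.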
Actually, some people advocate eliminating unit tests if they have been
exercised so thoroughly that they're unlikely to ever trigger again:
their information value has decreased, and they hamper architectural
changes because they tend to need adaptation.
I haven't seen such a policy implemented and can't judge its good and
bad points. Also, SymPy is dealing in mathematical truths, so unit tests
probably won't ever become logically irrelevant (though they could
become technically irrelevant: a regression test might have been
triggered by failures in code that's long gone, so the regression test
does not fulfill any useful purpose anymore).