On 06.05.2015 at 21:51, Aaron Meurer wrote:
> In my experience, you'll want to run the whole test suite without the
> cache (I know it's slow).
Hm. Evidently nobody has done this since 25 Jun 2014.
I.e.
- not ever locally for anybody
- not ever on Travis
- not during testing for two release candidates
- not during testing for a release (0.7.6)
> A lot of cache related issues only show up
> when the whole test suite is run, because some test will end up
> mutating some state that affects another test.
Doesn't the test runner do a clear_cache before each test?
I saw this happen during doctesting, in code that looked like it runs
for both normal tests and doctests, so I assumed it always happens.
> It's also very hard to
> predict where a cache related failure will pop up.
Sure.
I'm talking about a regression here, i.e. something that's already known to be fragile.
Look at the diff:
@@ -520,7 +520,7 @@ def _check(ct1, ct2, old):
if old == self.base:
return new**self.exp._subs(old, new)
- if old.func is self.func and self.base is old.base:
+ if old.func is self.func and self.base == old.base:
if self.exp.is_Add is False:
ct1 = self.exp.as_independent(Symbol, as_Add=False)
ct2 = old.exp.as_independent(Symbol, as_Add=False)
It's a subtle error that even a Chris Smith can fall prey to, so I think
this line is just fragile and needs to be tested, uncached, on at least
one path before the next release.
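To illustrate why that one-character change matters (a minimal toy, not
SymPy's actual cache): with a cache, constructing the "same" expression
twice returns one shared object, so an identity check like
`self.base is old.base` happens to succeed; without the cache, each
construction yields a fresh object and only structural equality (`==`)
still holds.

```python
class Expr:
    """Toy expression with an optional constructor cache (illustrative only)."""
    _cache = {}
    use_cache = True

    def __new__(cls, name):
        if cls.use_cache and name in cls._cache:
            return cls._cache[name]        # cache hit: shared object
        obj = object.__new__(cls)
        obj.name = name
        if cls.use_cache:
            cls._cache[name] = obj
        return obj

    def __eq__(self, other):
        return isinstance(other, Expr) and self.name == other.name

    def __hash__(self):
        return hash(self.name)

a, b = Expr("x"), Expr("x")
assert a is b               # cached: same object, so `is` happens to work
assert a == b

Expr.use_cache = False
Expr._cache.clear()
c, d = Expr("x"), Expr("x")
assert c is not d           # uncached: the identity check breaks ...
assert c == d               # ... but structural equality still holds
```

So a test suite run only with the cache enabled can never catch an
`is`-vs-`==` regression of this kind.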
Running the tests uncached on Travis in general might work (but the
runtime might be prohibitive).
Requiring an uncached test run before each prerelease might work, too. The
disadvantage here is that release blockers might lie undiscovered until
right before a release, when the best person to fix the blocker might be
unavailable.
The final option would be to run cache-related regression tests (and
*regression* tests only) without the cache. More infrastructure work
required, but faster detection with little extra overhead (assuming that
tests can be easily trimmed down to minimal examples).
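A sketch of that last option, relying on SymPy's `SYMPY_USE_CACHE`
environment variable (which SymPy reads at import time, so it has to be
set in a fresh subprocess). The test path and runner invocation are
illustrative, not a proposal for specific files:

```python
import os
import subprocess
import sys

def uncached_test_cmd(test_path):
    """Build (argv, env) to run one test file with the SymPy cache disabled.

    The cache setting is read when sympy is imported, so the test must
    run in a subprocess whose environment carries SYMPY_USE_CACHE=no.
    """
    env = dict(os.environ, SYMPY_USE_CACHE="no")
    argv = [sys.executable, "-m", "pytest", test_path]
    return argv, env

# usage (path is illustrative):
# argv, env = uncached_test_cmd("sympy/core/tests/test_subs.py")
# subprocess.run(argv, env=env, check=True)
```

The regression tests themselves stay in the normal suite; only this
small wrapper step would need to be added to CI.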