I think it's a bit confusing that, even with correct settings, some tests
always fail for some backends. If I compile any other software package and
its test suite fails, I usually suspect something went wrong. In addition,
failing tests make it hard to verify automatically that everything is all
right with the package, e.g. before deploying a new version.
So--what do you think about skipping tests that *must* fail?
Michael
AFAIK, there aren't any tests that are "supposed" to be failing; just
a few backend-dependent bugs that need to be tracked down.
As a general principle, though, skipping tests that must fail is a
good idea; I'm not sure, however, whether our test suite supports that.
Jacob
>
> On 7/6/07, Michael Radziej <m...@noris.de> wrote:
> > So--what do you think about skipping tests that *must* fail?
>
> AFAIK, there aren't any tests that are "supposed" to be failing; just
> a few backend-dependent bugs that need to be tracked down.
The serializer tests don't work for mysql/mysql_old with a transaction-enabled
MySQL storage engine, since MySQL checks foreign key constraints at row
insertion time rather than at commit time. There's nothing that can fix this ...
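For illustration (this is not from the Django test suite): SQLite's deferrable foreign keys let you see the same insert-time vs. commit-time distinction side by side, with InnoDB behaving like the immediate case:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child_immediate ("
    "  parent_id INTEGER REFERENCES parent(id))")
conn.execute(
    "CREATE TABLE child_deferred ("
    "  parent_id INTEGER REFERENCES parent(id)"
    "  DEFERRABLE INITIALLY DEFERRED)")

# Insert-time checking (what MySQL/InnoDB does): the orphan row is
# rejected immediately, before the transaction ever commits.
try:
    conn.execute("INSERT INTO child_immediate VALUES (42)")
    rejected_at_insert = False
except sqlite3.IntegrityError:
    rejected_at_insert = True

# Commit-time checking: the same orphan row is accepted for now ...
conn.execute("INSERT INTO child_deferred VALUES (42)")
# ... and only rejected when the transaction tries to commit.
try:
    conn.commit()
    rejected_at_commit = False
except sqlite3.IntegrityError:
    rejected_at_commit = True

print(rejected_at_insert, rejected_at_commit)  # True True
```

A serializer test that creates a forward reference (child before parent) can only pass under the commit-time model, which is why no fix on the Django side is possible.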
> As a general principle, though, skipping tests that must fail is a
> good idea; I'm not sure, though, if our test suite supports that.
I have patches that simply skip tests via an 'if'. I guess you'd prefer
something that gives you a report line such as "xxx tests skipped since they
don't work for %(reason)s"?
Duh - I knew that. Sorry.
> I have patches that simply skip tests via 'if'. I guess you'd prefer
> something such that you get a report line "xxx tests skipped since they
> don't work for %(reason)s"?
Yeah, that would be much better -- see the Python test suite, for
example; it gives a report at the end like "200 tests passed; 14
skipped; 1 failure".
Jacob
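A sketch of what that reporting could look like. This uses today's unittest skip decorators, which did not exist at the time of this thread (they landed in Python 2.7); CURRENT_BACKEND is a made-up stand-in for the real backend setting:

```python
import unittest

CURRENT_BACKEND = "mysql"  # hypothetical; would come from settings

class SerializerTests(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(1 + 1, 2)

    @unittest.skipIf(
        CURRENT_BACKEND in ("mysql", "mysql_old"),
        "MySQL checks foreign keys at insert time, not at commit time",
    )
    def test_forward_references(self):
        self.fail("known to fail on MySQL")

# Run the suite and print a summary in the spirit of
# "200 tests passed; 14 skipped; 1 failure":
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SerializerTests)
result = unittest.TestResult()
suite.run(result)
print("%d tests run; %d skipped; %d failures"
      % (result.testsRun, len(result.skipped), len(result.failures)))
# -> 2 tests run; 1 skipped; 0 failures
```

The skip reason is recorded alongside each skipped test in `result.skipped`, so the report can also say *why* a test was skipped, as Michael suggests.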
> Yeah, that would be much better -- see the Python test suite, for
> example; it gives a report at the end like "200 tests passed; 14
> skipped; 1 failure".
http://code.djangoproject.com/ticket/4788
May I count your reply as "Approved"?
We do that in quite a few places if you rummage through the test suite.
Grep for settings.DATABASE, etc. The remaining places are just waiting
for somebody to submit a clean patch, so go for it. It's not completely
trivial for the MySQL transaction case, because you'll need to check the
database options setting as well to ensure they aren't using InnoDB
(which I think Andy mentioned at one point was respected by the tests,
but I haven't checked that recently).
Regards,
Malcolm
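To make that check concrete, here is a rough sketch of such a guard against the old-style settings names (DATABASE_ENGINE, DATABASE_OPTIONS); the init_command convention for selecting InnoDB is an assumption on my part, not something verified against the real test suite:

```python
def runs_under_innodb(settings):
    """Best guess at whether the MySQL tests run on a transactional
    (InnoDB) storage engine.  The settings names follow old-style
    Django; the init_command check is an assumption, not taken from
    the actual test suite."""
    if getattr(settings, "DATABASE_ENGINE", "") not in ("mysql", "mysql_old"):
        return False
    options = getattr(settings, "DATABASE_OPTIONS", {})
    return "innodb" in options.get("init_command", "").lower()

# Usage with a dummy settings object:
class FakeSettings:
    DATABASE_ENGINE = "mysql"
    DATABASE_OPTIONS = {"init_command": "SET storage_engine=INNODB"}

print(runs_under_innodb(FakeSettings()))  # True
```

A test could then be skipped only when both conditions hold: the backend is MySQL *and* the options suggest a transactional storage engine.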